A chaos detectable and time step-size adaptive numerical scheme for nonlinear dynamical systems
NASA Astrophysics Data System (ADS)
Chen, Yung-Wei; Liu, Chein-Shan; Chang, Jiang-Ren
2007-02-01
The first step in investigating the dynamics of a continuous-time system described by ordinary differential equations is to integrate them to obtain trajectories. In this paper, we convert the group-preserving scheme (GPS) developed by Liu [International Journal of Non-Linear Mechanics 36 (2001) 1047-1068] to a time step-size adaptive scheme, x_{k+1} = x_k + h f(x_k, t_k), where x ∈ R^n is the vector of system variables we are concerned with, and f(x,t) ∈ R^n is a time-varying vector field. The scheme has a form similar to the Euler scheme, x_{k+1} = x_k + Δt f(x_k, t_k), but our step size h is adapted automatically. Very interestingly, the ratio h/Δt, which we call the adaptive factor, can forecast the appearance of chaos if the considered dynamical system becomes chaotic. Numerical examples of the Duffing equation, the Lorenz equation and the Rössler equation, which may exhibit chaotic behavior under certain parameter values, are used to demonstrate these phenomena. Two other non-chaotic examples are included to compare the performance of the GPS and the adaptive one.
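The adaptive-factor idea can be illustrated with a generic sketch (this is not Liu's group-preserving scheme; it is a plain step-doubling error controller wrapped around a forward Euler step, applied here to the Lorenz system, with the accepted ratios h/Δt recorded as the diagnostic the abstract describes):

```python
import numpy as np

def lorenz(x, t, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def adaptive_euler(f, x0, t_end, dt=1e-3, tol=1e-5):
    """Euler integration with step-doubling error control.

    Returns the final state and the list of accepted ratios h/dt
    (an 'adaptive factor' diagnostic; h is capped at the nominal dt)."""
    t, x, h = 0.0, np.asarray(x0, dtype=float), dt
    factors = []
    while t < t_end:
        h = min(h, t_end - t)
        full = x + h * f(x, t)                        # one step of size h
        half = x + 0.5 * h * f(x, t)                  # two steps of size h/2
        two = half + 0.5 * h * f(half, t + 0.5 * h)
        err = np.linalg.norm(two - full)
        if err > tol:                                 # reject and shrink
            h *= 0.5
            continue
        x, t = two, t + h                             # accept the finer result
        factors.append(h / dt)
        if err < 0.25 * tol:                          # error small: grow step
            h = min(2.0 * h, dt)
    return x, factors
```

In a chaotic regime the trajectory's local error varies strongly, so the recorded factors fluctuate well below 1; in a quiescent regime they stay pinned near 1.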
Adaptive time steps in trajectory surface hopping simulations
NASA Astrophysics Data System (ADS)
Spörkel, Lasse; Thiel, Walter
2016-05-01
Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.
An adaptive time-stepping strategy for solving the phase field crystal model
Zhang, Zhengru; Ma, Yuan; Qiao, Zhonghua
2013-09-15
In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. Numerical simulation of the PFC model needs a long time to reach steady state, so a large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can resolve not only the steady-state solution but also the dynamical development of the solution efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly reduced for long time simulations.
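The abstract does not give the paper's exact step-size formula; a common choice in the phase-field literature for energy-derivative-based adaptivity, shown here purely as an assumed sketch, selects Δt from |dE/dt|:

```python
import math

def energy_adaptive_dt(dE_dt, dt_min=1e-3, dt_max=1e-1, alpha=1e3):
    """Pick the time step from the energy decay rate: small steps while
    the energy changes fast, large steps near steady state.  The formula
    dt = max(dt_min, dt_max / sqrt(1 + alpha*|E'|^2)) is a common choice
    in the phase-field literature, not necessarily the paper's own."""
    return max(dt_min, dt_max / math.sqrt(1.0 + alpha * dE_dt ** 2))
```

During early coarsening |dE/dt| is large and the step stays near dt_min; as the energy flattens the step grows toward dt_max, which is what makes long-time PFC runs affordable.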
Convergence Acceleration for Multistage Time-Stepping Schemes
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli L.; Rossow, C.-C.; Vatsa, V. N.
2006-01-01
The convergence of a Runge-Kutta (RK) scheme with multigrid is accelerated by preconditioning with a fully implicit operator. With the extended stability of the Runge-Kutta scheme, CFL numbers as high as 1000 can be used. The implicit preconditioner addresses the stiffness in the discrete equations associated with stretched meshes. Numerical dissipation operators (based on the Roe scheme, a matrix formulation, and the CUSP scheme) as well as the number of RK stages are considered in evaluating the RK/implicit scheme. Both the numerical and computational efficiency of the scheme with the different dissipation operators are discussed. The RK/implicit scheme is used to solve the two-dimensional (2-D) and three-dimensional (3-D) compressible, Reynolds-averaged Navier-Stokes equations. In two dimensions, turbulent flows over an airfoil at subsonic and transonic conditions are computed. The effects of mesh cell aspect ratio on convergence are investigated for Reynolds numbers between 5.7 x 10^6 and 100.0 x 10^6. Results are also obtained for a transonic wing flow. For both 2-D and 3-D problems, the computational time of a well-tuned standard RK scheme is reduced by at least a factor of four.
A new adaptive time step method for unsteady flow simulations in a human lung.
Fernández-Tena, Ana; Marcos, Alfonso C; Martínez, Cristina; Keith Walters, D
2017-04-07
The innovation presented is a method for adaptive time-stepping that clusters time steps in portions of the cycle in which flow variables change rapidly, based on the concept of taking a uniform step in a relevant dependent variable rather than a uniform step in the independent variable, time. A user-defined function was developed to adapt the magnitude of the time step to a defined rate of change in inlet velocity. Quantitative comparison indicates that the new adaptive time-stepping method significantly improves accuracy for simulations using an equivalent number of time steps per cycle.
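A minimal sketch of the uniform-step-in-the-dependent-variable idea (the function names, bounds, and sinusoidal inlet velocity are illustrative, not taken from the paper):

```python
import math

def dependent_variable_steps(v_prime, t_end, dv_target, dt_min, dt_max):
    """Choose each time step so the dependent variable (here, inlet
    velocity) changes by roughly dv_target per step:
    dt ~ dv_target / |dv/dt|, clipped to [dt_min, dt_max].
    Steps automatically cluster where the velocity changes fast."""
    t, steps = 0.0, []
    while t < t_end:
        rate = abs(v_prime(t))
        dt = max(dt_min, min(dt_max, dv_target / max(rate, 1e-12)))
        dt = min(dt, t_end - t)        # do not overshoot the end time
        steps.append(dt)
        t += dt
    return steps

# Sinusoidal inlet velocity over one breathing cycle of period T (illustrative):
T = 4.0
steps = dependent_variable_steps(
    v_prime=lambda t: (2.0 * math.pi / T) * math.cos(2.0 * math.pi * t / T),
    t_end=T, dv_target=0.05, dt_min=1e-3, dt_max=0.5)
```

Near the zero crossings of the velocity derivative the steps open up to dt_max, while near peak acceleration they shrink, so the same step budget buys more resolution where the flow actually changes.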
On large time step TVD scheme for hyperbolic conservation laws and its efficiency evaluation
NASA Astrophysics Data System (ADS)
Qian, ZhanSen; Lee, Chun-Hian
2012-08-01
A large time step (LTS) TVD scheme originally proposed by Harten is modified and further developed in the present paper and applied to the Euler equations in multidimensional problems. By first revealing the drawbacks of Harten's original LTS TVD scheme and explaining the occurrence of spurious oscillations, a modified formulation of its characteristic transformation is proposed and a high-resolution, strongly robust LTS TVD scheme is formulated. The modified scheme is proven to be capable of taking larger time steps than the original one. Following the modified strategy, LTS TVD versions of Yee's upwind TVD scheme and the Yee-Roe-Davis symmetric TVD scheme are constructed. The family of LTS schemes is then extended to multiple dimensions by a time splitting procedure, and a boundary condition treatment suitable for the LTS scheme is also proposed. Numerical experiments on Sod's shock tube problem and inviscid flows over the NACA0012 airfoil and the ONERA M6 wing are performed to validate the developed schemes. Computational efficiencies of the respective schemes under different CFL numbers are also evaluated and compared. The results reveal that the improvement is sizable compared to the respective single time step schemes, especially for CFL numbers ranging from 1.0 to 4.0.
A class of large time step Godunov schemes for hyperbolic conservation laws and applications
NASA Astrophysics Data System (ADS)
Qian, ZhanSen; Lee, Chun-Hian
2011-08-01
A large time step (LTS) Godunov scheme first proposed by LeVeque is further developed in the present work and applied to the Euler equations. Based on an analysis of the computational performance of LeVeque's linear approximation of wave interactions, a multi-wave approximation of the rarefaction fan is proposed to avoid the occurrence of rarefaction shocks in computations. The developed LTS scheme is validated using 1-D test cases, showing high resolution for discontinuities and the capability of maintaining computational stability when large CFL numbers are imposed. The scheme is then extended to multidimensional problems using a dimensional splitting technique; a boundary condition treatment for this multidimensional LTS scheme is also proposed. As demonstration problems, inviscid flows over the NACA0012 airfoil and the ONERA M6 wing with a given sweep angle are simulated using the developed LTS scheme. The numerical results reveal the high-resolution nature of the scheme, with shocks captured within 1-2 grid points. The resolution of the scheme improves gradually as the CFL number increases, up to an upper bound beyond which the solution becomes severely oscillatory across the shock. Computational efficiency comparisons show that the developed scheme is capable of reducing the computational time effectively by increasing the time step (CFL number).
Explicit large time-step schemes for the shallow water equations
NASA Technical Reports Server (NTRS)
Turkel, E.; Zwas, G.
1979-01-01
Modifications to explicit finite difference schemes for solving the shallow water equations for meteorological applications by increasing the time step for the fast gravity waves are analyzed. Terms associated with the gravity waves in the shallow water equations are treated on a coarser grid than those associated with the slow Rossby waves, which contain much more of the available energy and must be treated with higher accuracy, enabling a several-fold increase in time step without degrading the accuracy of the solution. The method is presented in Cartesian and spherical coordinates for a rotating earth, using generalized leapfrog, frozen coefficient, and Fourier filtering finite difference schemes. Computational results verify the numerical stability of the approach.
NASA Technical Reports Server (NTRS)
Mohan, Ram V.; Tamma, Kumar K.
1993-01-01
An adaptive time stepping strategy for transient thermal analysis of engineering systems is described. The strategy computes the time step based on the local truncation error, provides good global error control, and obtains optimal time steps to be used during the analysis. Combined mesh partitionings involving FEM/FVM meshes, chosen according to the physical situation to obtain numerically improved physical representations, are also proposed. Numerical test cases are described and comparative pros and cons are identified for practical situations.
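A hedged sketch of a standard local-truncation-error step controller of the kind the abstract describes (the paper's exact controller is not given; the safety factor and clipping bounds below are conventional textbook choices):

```python
def new_time_step(dt, err, tol, order=1, safety=0.9, fac_min=0.2, fac_max=5.0):
    """Standard local-truncation-error controller: scale the step so the
    estimated error approaches the tolerance, dt_new = dt * (tol/err)^(1/(p+1)),
    damped by a safety factor and clipped to limit growth and shrinkage."""
    factor = safety * (tol / max(err, 1e-30)) ** (1.0 / (order + 1))
    return dt * min(fac_max, max(fac_min, factor))
```

When the error estimate equals the tolerance the step shrinks slightly (by the safety factor); when the error is far below tolerance the step grows, but never by more than fac_max per step, which keeps the global error under control.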
Large time-step stability of explicit one-dimensional advection schemes
NASA Technical Reports Server (NTRS)
Leonard, B. P.
1993-01-01
There is a widespread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' - that the Courant number, c = uΔt/Δx, must be less than or equal to one for stability in the von Neumann sense. This puts severe limitations on the time step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often require less restrictive time-step conditions for stability but are more expensive per time step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis does not place any restriction on the allowable Courant number. Any explicit scheme that is stable for c ≤ 1, with a complex amplitude ratio G(c), can easily be extended to arbitrarily large c. The complex amplitude ratio is then given by exp(-iNθ) G(Δc), where N is the integer part of c, and Δc = c - N (< 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all, but rather a 'range restriction' on the 'pieces' in a piecewise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large Δt. The analysis also includes a simple interpretation of (large Δt) total-variation-diminishing (TVD) constraints.
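The extension formula in the abstract can be checked directly. Taking first-order upwind as the base scheme (its amplitude ratio G(c) = 1 - c(1 - e^{-iθ}) is standard), the large-c amplitude ratio exp(-iNθ) G(Δc) has modulus |G(Δc)| ≤ 1 for any Courant number:

```python
import cmath

def G_upwind(c, theta):
    """von Neumann amplitude ratio of first-order upwind advection;
    a convex combination of 1 and e^{-i theta}, so |G| <= 1 for 0 <= c <= 1."""
    return 1.0 - c * (1.0 - cmath.exp(-1j * theta))

def G_large(c, theta):
    """Large-time-step extension from the abstract: exp(-i N theta) * G(dc),
    with N the integer part of c and dc = c - N < 1."""
    N = int(c)
    dc = c - N
    return cmath.exp(-1j * N * theta) * G_upwind(dc, theta)
```

The prefactor exp(-iNθ) is a pure phase (an exact shift by N cells), so the modulus of the extended ratio equals |G(Δc)|, which the base scheme already bounds by one; this is the sense in which the CFL limit is a range restriction, not a stability limit.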
NASA Astrophysics Data System (ADS)
Gupta, Shubhangi; Wohlmuth, Barbara; Helmig, Rainer
2016-05-01
We present an extrapolation-based semi-implicit multi-rate time stepping (MRT) scheme and a compound-fast MRT scheme for a naturally partitioned, multi-time-scale hydro-geomechanical hydrate reservoir model. The performance of the two MRT methods is evaluated in terms of speed-up and accuracy by comparison to an iteratively coupled solution scheme, and their advantages and disadvantages are discussed. We observe that the extrapolation-based semi-implicit method gives a higher speed-up but is strongly dependent on the relative time scales of the latent (slow) and active (fast) components. On the other hand, the compound-fast method is more robust and less sensitive to the relative time scales, but gives a lower speed-up than the semi-implicit method, especially when the relative time scales of the active and latent components are comparable.
An Adaptive Fourier Filter for Relaxing Time Stepping Constraints for Explicit Solvers
Gelb, Anne; Archibald, Richard K
2015-01-01
Filtering is necessary to stabilize piecewise smooth solutions. The resulting diffusion stabilizes the method, but may fail to resolve the solution near discontinuities. Moreover, high order filtering still requires cost prohibitive time stepping. This paper introduces an adaptive filter that controls spurious modes of the solution, but is not unnecessarily diffusive. Consequently we are able to stabilize the solution with larger time steps, but also take advantage of the accuracy of a high order filter.
Multi time-step wavefront reconstruction for tomographic adaptive-optics systems.
Ono, Yoshito H; Akiyama, Masayuki; Oya, Shin; Lardière, Olivier; Andersen, David R; Correia, Carlos; Jackson, Kate; Bradley, Colin
2016-04-01
In tomographic adaptive-optics (AO) systems, errors due to tomographic wavefront reconstruction limit the performance and angular size of the scientific field of view (FoV) within which AO correction is effective. We propose a multi time-step tomographic wavefront reconstruction method to reduce the tomographic error by using measurements from both the current and previous time steps simultaneously. We further outline the method to feed the reconstructor with the wind speed and direction of each turbulence layer. An end-to-end numerical simulation, assuming a multi-object AO (MOAO) system on a 30 m aperture telescope, shows that the multi time-step reconstruction increases the Strehl ratio (SR) over a scientific FoV of 10 arc min in diameter by a factor of 1.5-1.8 when compared to the classical tomographic reconstructor, depending on the guide star asterism and with perfect knowledge of wind speeds and directions. We also evaluate the multi time-step reconstruction method and the wind estimation method on the RAVEN demonstrator in a laboratory setting. The wind speeds and directions at multiple atmospheric layers are measured successfully in the laboratory experiment by our wind estimation method, with errors below 2 m s^{-1}. With these wind estimates, the multi time-step reconstructor increases the SR value by a factor of 1.2-1.5, which is consistent with the prediction from the end-to-end numerical simulation.
An implicit time-stepping scheme for rigid body dynamics with Coulomb friction
STEWART,DAVID; TRINKLE,JEFFREY C.
2000-02-15
In this paper a new time-stepping method for simulating systems of rigid bodies is given. Unlike methods which take an instantaneous point of view, the method is based on impulse-momentum equations, and so does not need to explicitly resolve impulsive forces. On the other hand, the method is distinct from previous impulsive methods in that it does not require explicit collision checking and it can handle simultaneous impacts. Numerical results are given for one planar and one three-dimensional example, which demonstrate the practicality of the method, and its convergence as the step size becomes small.
NASA Astrophysics Data System (ADS)
Kleiber, R.; Hatzky, R.; Könies, A.; Mishchenko, A.; Sonnendrücker, E.
2016-03-01
A new algorithm for electromagnetic gyrokinetic simulations, the so-called "pullback transformation scheme" proposed by Mishchenko et al. [Phys. Plasmas 21, 092110 (2014)], is motivated as an explicit time integrator reset after each full time step and investigated in detail. Using a numerical dispersion relation valid in slab geometry, it is shown that the linear properties of the scheme are comparable to those of an implicit v∥-scheme. A nonlinear extension of the mixed-variable formulation, derived consistently from a field Lagrangian, is proposed. The scheme shows excellent numerical properties, with a low statistical noise level and a large time step, especially for MHD modes. The example of a nonlinear slab tearing mode simulation is used to illustrate the properties of different formulations of the physical model equations.
Implicit schemes with large time step for non-linear equations: application to river flow hydraulics
NASA Astrophysics Data System (ADS)
Burguete, J.; García-Navarro, P.
2004-10-01
In this work, first-order upwind implicit schemes are considered. The traditional tridiagonal scheme is rewritten as a sum of two bidiagonal schemes in order to produce a simpler method better suited for unsteady transcritical flows. On the other hand, the origin of the instabilities associated with the use of upwind implicit methods for shock propagation is identified, and a new stability condition for non-linear problems is proposed. This modification produces a robust, simple and accurate upwind semi-explicit scheme suitable for discontinuous flows with high Courant-Friedrichs-Lewy (CFL) numbers. The discretization at the boundaries is based on the condition of global mass conservation, enabling a fully conservative solution for all kinds of boundary conditions. The performance of the proposed technique is shown in the solution of the inviscid Burgers' equation, an ideal dambreak test case, some steady open channel flow test cases with analytical solutions, and a realistic flood routing problem, where stable and accurate solutions are presented using CFL values up to 100.
Convergence of a Time-Stepping Scheme for Rigid-Body Dynamics and Resolution of Painlevé's Problem
NASA Astrophysics Data System (ADS)
Stewart, David E.
This paper gives convergence theory for a new implicit time-stepping scheme for general rigid-body dynamics with Coulomb friction and purely inelastic collisions and shocks. An important consequence of this work is the proof of existence of solutions of rigid-body problems which include the famous counterexamples of Painlevé. The mathematical basis for this work is the formulation of the rigid-body problem in terms of measure differential inclusions of Moreau and Monteiro Marques. The implicit time-stepping method is based on complementarity problems, and is essentially a particular case of the algorithm described in Anitescu & Potra [2], which in turn is based on the formulation in Stewart & Trinkle [47].
NASA Astrophysics Data System (ADS)
Cavalcanti, José Rafael; Dumbser, Michael; Motta-Marques, David da; Fragoso Junior, Carlos Ruberto
2015-12-01
In this article we propose a new conservative high resolution TVD (total variation diminishing) finite volume scheme with time-accurate local time stepping (LTS) on unstructured grids for the solution of scalar transport problems, which are typical in the context of water quality simulations. To keep the presentation of the new method as simple as possible, the algorithm is only derived in two space dimensions and for purely convective transport problems, hence neglecting diffusion and reaction terms. The new numerical method for the solution of the scalar transport is directly coupled to the hydrodynamic model of Casulli and Walters (2000) that provides the dynamics of the free surface and the velocity vector field based on a semi-implicit discretization of the shallow water equations. Wetting and drying is handled rigorously by the nonlinear algorithm proposed by Casulli (2009). The new time-accurate LTS algorithm allows a different time step size for each element of the unstructured grid, based on an element-local Courant-Friedrichs-Lewy (CFL) stability condition. The proposed method does not need any synchronization between different time steps of different elements and is by construction locally and globally conservative. The LTS scheme is based on a piecewise linear polynomial reconstruction in space-time using the MUSCL-Hancock method, to obtain second order of accuracy in both space and time. The new algorithm is first validated on some classical test cases for pure advection problems, for which exact solutions are known. In all cases we obtain a very good level of accuracy, showing also numerical convergence results; we furthermore confirm mass conservation up to machine precision and observe an improved computational efficiency compared to a standard second order TVD scheme for scalar transport with global time stepping (GTS). Then, the new LTS method is applied to some more complex problems, where the new scalar transport scheme has also been coupled to
Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes
Lu, S.
2002-07-01
As a result of advancing TCP/IP-based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity-related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches, so the execution of the coupled codes usually requires significant CPU time when a complicated system is analyzed. A Leap Frog scheme has been used to reduce the run time. The extent of the decoupling is usually determined through a trial-and-error process for a specific analysis. It is the intent of this paper to develop a set of general criteria which can be used to invoke an automatic Leap Frog algorithm. The algorithm will not only reduce the run time but also preserve the accuracy. The criteria will also serve as the basis of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced and the thermal-hydraulic code is marching with a relatively large time step. (authors)
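A sketch of one possible sub-cycling criterion in the spirit of the paper (the function and its threshold are hypothetical; the paper's actual criteria are not given in the abstract): choose the number of neutronics sub-steps inside one thermal-hydraulic step so that the reactivity change per sub-step stays bounded.

```python
import math

def neutronics_subcycles(drho_dt, dt_th, drho_max):
    """Hypothetical sub-cycle criterion: split one thermal-hydraulic step
    dt_th into enough neutronics sub-steps that the reactivity change
    per sub-step, |drho/dt| * dt_th / n, stays below drho_max."""
    return max(1, math.ceil(abs(drho_dt) * dt_th / drho_max))
```

During slow transients the criterion returns 1 and the codes march in lock-step (or Leap Frog); when a sudden reactivity insertion makes |drho/dt| spike, only the neutronics side is sub-cycled, so the extra cost is confined to the transient.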
Gavrea, B. I.; Anitescu, M.; Potra, F. A.; Mathematics and Computer Science; Univ. of Pennsylvania; Univ. of Maryland
2008-01-01
In this work we present a framework for the convergence analysis in a measure differential inclusion sense of a class of time-stepping schemes for multibody dynamics with contacts, joints, and friction. This class of methods solves one linear complementarity problem per step and contains the semi-implicit Euler method, as well as trapezoidal-like methods for which second-order convergence was recently proved under certain conditions. By using the concept of a reduced friction cone, the analysis includes, for the first time, a convergence result for the case that includes joints. An unexpected intermediary result is that we are able to define a discrete velocity function of bounded variation, although the natural discrete velocity function produced by our algorithm may have unbounded variation.
NASA Astrophysics Data System (ADS)
Hejazialhosseini, Babak; Rossinelli, Diego; Bergdorf, Michael; Koumoutsakos, Petros
2010-11-01
We present a space-time adaptive solver for single- and multi-phase compressible flows that couples average interpolating wavelets with high-order finite volume schemes. The solver introduces the concept of wavelet blocks, handles large jumps in resolution and employs local time-stepping for efficient time integration. We demonstrate that the inherently sequential wavelet-based adaptivity can be implemented efficiently on multicore computer architectures using task-based parallelism. We validate our computational method on a number of benchmark problems and present simulations of shock-bubble interaction at different Mach numbers, demonstrating the accuracy and computational performance of the method.
Numerical simulation of diffusion MRI signals using an adaptive time-stepping method
NASA Astrophysics Data System (ADS)
Li, Jing-Rebecca; Calhoun, Donna; Poupon, Cyril; Le Bihan, Denis
2014-01-01
The effect on the MRI signal of water diffusion in biological tissues in the presence of applied magnetic field gradient pulses can be modelled by a multiple compartment Bloch-Torrey partial differential equation. We present a method for the numerical solution of this equation by coupling a standard Cartesian spatial discretization with an adaptive time discretization. The time discretization is done using the explicit Runge-Kutta-Chebyshev method, which is more efficient than the forward Euler time discretization for diffusive-type problems. We use this approach to simulate the diffusion MRI signal from the extra-cylindrical compartment in a tissue model of the brain gray matter consisting of cylindrical and spherical cells and illustrate the effect of cell membrane permeability.
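The efficiency argument for Runge-Kutta-Chebyshev (RKC) over forward Euler on diffusive problems can be made concrete: the real stability interval of a damped RKC method grows roughly like 0.65 s^2 in the stage count s (0.65 is the commonly quoted constant for damped second-order RKC and is an assumption here), so the work per step grows only like the square root of dt·λ_max, whereas Euler sub-stepping grows linearly in it.

```python
import math

def rkc_stages(dt, lam_max, beta=0.65):
    """Smallest RKC stage count s with beta * s^2 >= dt * lam_max, where
    lam_max is the spectral radius of the (negative of the) diffusion
    operator.  beta ~ 0.65 is the usual damped-RKC constant (assumed)."""
    return max(2, math.ceil(math.sqrt(dt * lam_max / beta)))
```

For dt·λ_max = 1000, forward Euler needs about 500 sub-steps (dt_Euler ≤ 2/λ_max) while RKC needs about 40 stages, which is why the explicit RKC discretization pays off for the stiff Bloch-Torrey diffusion problem.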
Tavakoli, Rouhollah
2016-01-01
An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed based on a combination of David Eyre's time stepping scheme and a Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time step size. Its MATLAB implementation is included in the appendix for numerical evaluation of the algorithm and reproduction of the presented results. -- Highlights: •Extension of Eyre's convex–concave splitting scheme to multiphase systems. •Efficient solution of spinodal decomposition in multi-component systems. •Efficient solution of the least-perimeter periodic space partitioning problem. •Development of a penalization strategy to avoid trivial solutions. •Presentation of a MATLAB implementation of the introduced algorithm.
Newmark local time stepping on high-performance computing architectures
NASA Astrophysics Data System (ADS)
Rietmann, Max; Grote, Marcus; Peter, Daniel; Schenk, Olaf
2017-04-01
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
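The element-size-to-time-step relationship the abstract describes can be sketched as follows (a hedged illustration of power-of-two multilevel LTS level assignment; the Courant constant and element sizes are arbitrary):

```python
import math

def cfl_time_step(h, wave_speed, courant=0.5):
    """Largest stable explicit step for an element of size h (CFL condition)."""
    return courant * h / wave_speed

def lts_level(dt_elem, dt_global):
    """Smallest power-of-two refinement level p with dt_global / 2**p <= dt_elem,
    so each element advances with a near-optimal local step (multilevel LTS)."""
    if dt_elem >= dt_global:
        return 0
    return math.ceil(math.log2(dt_global / dt_elem))

# A 100x element-size contrast forces the tiny global step without LTS:
sizes = [1.0, 0.5, 0.01]
dts = [cfl_time_step(h, wave_speed=1.0) for h in sizes]   # [0.5, 0.25, 0.005]
dt_global = max(dts)
levels = [lts_level(dt, dt_global) for dt in dts]
```

Without LTS every element would march at the smallest CFL step (0.005 here); with the level assignment only the refined elements take 2^p sub-steps while the bulk of the mesh keeps the large step.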
Crowder, D W; Onstad, D W
2005-04-01
We expanded a simulation model of the population dynamics and genetics of the western corn rootworm for a landscape of corn, soybean, and other crops to study the simultaneous development of resistance to both crop rotation and transgenic corn. Transgenic corn effective against corn rootworm was approved in 2003 and may be a very effective new technology for control of western corn rootworm in areas with or without the rotation-resistant variant. In simulations of areas with rotation-resistant populations, planting transgenic corn in only rotated cornfields was a robust strategy to prevent resistance to both traits. In these areas, planting transgenic corn in only continuous fields was not an effective strategy for preventing adaptation to crop rotation or transgenic corn. In areas without rotation-resistant phenotypes, gene expression of the allele for resistance to transgenic corn was the most important factor affecting the development of resistance to transgenic corn. If the allele for resistance to transgenic corn is recessive, resistance can be delayed longer than 15 yr, but if the resistant allele is dominant then resistance usually developed within 15 yr. In a sensitivity analysis, among the parameters investigated, initial allele frequency and density dependence were the two most important factors affecting the evolution of resistance. We compared the results of this simulation model with those of a more complicated model, and the results of the two were similar. This indicates that results from a simpler model with a generational time step can compare favorably with those of a more complex model with a daily time step.
Automatic Time Stepping with Global Error Control for Groundwater Flow Models
Tang, Guoping
2008-09-01
An automatic time stepping scheme with global error control is proposed for the time integration of the diffusion equation to simulate groundwater flow in confined aquifers. The scheme is based on an a posteriori error estimate for discontinuous Galerkin (dG) finite element methods. A stability factor is involved in the error estimate, and it is used to adapt the time step and control the global temporal error for the backward difference method. The stability factor can be estimated by solving a dual problem; it is not sensitive to the accuracy of the dual solution, so the computational overhead can be minimized by solving the dual problem using large time steps. Numerical experiments are conducted to show the application and performance of the automatic time stepping scheme. Implementation of the scheme can lead to improvements in accuracy and efficiency for groundwater flow models.
Discontinuous Galerkin Methods and Local Time Stepping for Wave Propagation
Grote, M. J.; Mitkova, T.
2010-09-30
Locally refined meshes impose severe stability constraints on explicit time-stepping methods for the numerical simulation of time dependent wave phenomena. To overcome that stability restriction, local time-stepping methods are developed, which allow arbitrarily small time steps precisely where small elements in the mesh are located. When combined with a discontinuous Galerkin finite element discretization in space, which inherently leads to a diagonal mass matrix, the resulting numerical schemes are fully explicit. Starting from the classical Adams-Bashforth multi-step methods, local time stepping schemes of arbitrarily high accuracy are derived. Numerical experiments validate the theory and illustrate the usefulness of the proposed time integration schemes.
Efficient multiple time-stepping algorithms of higher order
NASA Astrophysics Data System (ADS)
Demirel, Abdullah; Niegemann, Jens; Busch, Kurt; Hochbruck, Marlis
2015-03-01
Multiple time-stepping (MTS) algorithms allow the efficient integration of large systems of ordinary differential equations in which a few stiff terms restrict the time step of an otherwise non-stiff system. In this work, we discuss a flexible class of MTS techniques based on multistep methods. Our approach contains several popular methods as special cases, and it allows the easy construction of novel and efficient higher-order MTS schemes. In addition, we demonstrate how to adapt the stability contour of the non-stiff time integration to the physical system at hand. This allows significantly larger time steps when compared to previously known multistep MTS approaches. As an example, we derive novel predictor-corrector (PCMTS) schemes specifically optimized for the time integration of damped wave equations on locally refined meshes. In a set of numerical experiments, we demonstrate the performance of our scheme on discontinuous Galerkin time-domain (DGTD) simulations of Maxwell's equations.
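For orientation, the simplest flavor of MTS (the slow force frozen over the outer step, the stiff fast term sub-stepped; this is not the multistep PCMTS construction of the paper) can be sketched as:

```python
def mts_step(y, t, dt, f_slow, f_fast, m):
    """One outer step of the simplest multiple-time-stepping scheme:
    evaluate the (expensive) slow force once, then advance the stiff fast
    term with m cheap forward-Euler sub-steps of size dt/m."""
    s = f_slow(y, t)               # slow force: evaluated once per outer step
    h = dt / m
    for i in range(m):
        y = y + h * (f_fast(y, t + i * h) + s)
    return y

# Stiff scalar test problem y' = -50 y + 1 (steady state y = 0.02):
y, t, dt = 1.0, 0.0, 0.1
for _ in range(100):
    y = mts_step(y, t, dt, f_slow=lambda y, t: 1.0,
                 f_fast=lambda y, t: -50.0 * y, m=10)
    t += dt
```

With m = 1 the same outer step would violate the fast term's stability bound (dt·50 = 5 > 2) and blow up; sub-stepping only the fast term recovers stability at a fraction of the cost of shrinking the whole step.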
ADER-WENO finite volume schemes with space-time adaptive mesh refinement
NASA Astrophysics Data System (ADS)
Dumbser, Michael; Zanotti, Olindo; Hidalgo, Arturo; Balsara, Dinshaw S.
2013-09-01
We present the first high order one-step ADER-WENO finite volume scheme with adaptive mesh refinement (AMR) in multiple space dimensions. High order spatial accuracy is obtained through a WENO reconstruction, while a high order one-step time discretization is achieved using a local space-time discontinuous Galerkin predictor method. Due to the one-step nature of the underlying scheme, the resulting algorithm is particularly well suited for an AMR strategy on space-time adaptive meshes, i.e. with time-accurate local time stepping. The AMR property has been implemented 'cell-by-cell', with a standard tree-type algorithm, while the scheme has been parallelized via the message passing interface (MPI) paradigm. The new scheme has been tested over a wide range of examples for nonlinear systems of hyperbolic conservation laws, including the classical Euler equations of compressible gas dynamics and the equations of magnetohydrodynamics (MHD). High order in space and time has been confirmed via a numerical convergence study, and a detailed analysis of the computational speed-up with respect to highly refined uniform meshes is also presented. We also show test problems where the presented high order AMR scheme behaves clearly better than traditional second order AMR methods. The proposed scheme, which combines for the first time high order ADER methods with space-time adaptive grids in two and three space dimensions, is likely to become a useful tool in several fields of computational physics, applied mathematics and mechanics.
Adaptive Numerical Dissipative Control in High Order Schemes for Multi-D Non-Ideal MHD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, B.
2004-01-01
The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that includes viscosity and resistivity. The key idea consists of automatic detection of different flow features by distinct sensors to signal the appropriate type and amount of numerical dissipation/filtering where needed, leaving the rest of the region free of numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears) and redundant multi-resolution wavelets (WAV) (for all of the above types of flow features). These filter approaches also provide a natural and efficient way to minimize the Div(B) numerical error. The filter scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme for the inviscid flux derivatives. If necessary, a small amount of high order linear dissipation is used to remove spurious high frequency oscillations. For example, an eighth-order centered linear dissipation (AD8) might be included in conjunction with a spatially sixth-order base scheme. The inviscid difference operator is applied twice for the viscous flux derivatives. After the completion of a full time step of the base scheme, the solution is adaptively filtered by the product of a 'flow detector' and the 'nonlinear dissipative portion' of a high-resolution shock-capturing scheme. In addition, the scheme-independent wavelet flow detector can be used in conjunction with spatially compact, spectral or spectral element type base schemes. The ACM and wavelet filter schemes using the dissipative portion of a second-order shock-capturing scheme with sixth-order spatial central base scheme for both the inviscid and viscous MHD flux
Crowder, D W; Onstad, D W; Cray, M E; Pierce, C M F; Hager, A G; Ratcliffe, S T; Steffey, K L
2005-04-01
Western corn rootworm, Diabrotica virgifera virgifera LeConte, has overcome crop rotation in several areas of the north central United States. The effectiveness of crop rotation for management of corn rootworm has begun to fail in many areas of the midwestern United States; thus, new management strategies need to be developed to control rotation-resistant populations. Transgenic corn, Zea mays L., effective against western corn rootworm, may be the most effective new technology for control of this pest in areas with or without populations adapted to crop rotation. We expanded a simulation model of the population dynamics and genetics of the western corn rootworm for a landscape of corn; soybean, Glycine max (L.); and other crops to study the simultaneous development of resistance to both crop rotation and transgenic corn. Results indicate that planting transgenic corn in first-year cornfields is a robust strategy to prevent resistance to both crop rotation and transgenic corn in areas where rotation-resistant populations are currently a problem or may be a problem in the future. In these areas, planting transgenic corn only in continuous cornfields is not an effective strategy to prevent resistance to either trait. In areas without rotation-resistant populations, gene expression of the allele for resistance to transgenic corn, R, is the most important factor affecting the evolution of resistance. If R is recessive, resistance can be delayed longer than 15 yr. If R is dominant, resistance may be difficult to prevent. In a sensitivity analysis, results indicate that density dependence, rotational level in the landscape, and initial allele frequency are the three most important factors affecting the results.
Adaptive mobility management scheme in hierarchical mobile IPv6
NASA Astrophysics Data System (ADS)
Fang, Bo; Song, Junde
2004-04-01
Hierarchical mobile IPv6 makes mobility management localized: registration with the HA is needed only when the MN moves between MAP domains. This paper proposes an adaptive mobility management scheme based on hierarchical mobile IPv6. The scheme focuses on the MN operation as well as the MAP operation during handoff. An adaptive MAP selection algorithm can be used to select a suitable MAP to register with once the MN moves into a new subnet, so that the MAP can adaptively change its management domain. Furthermore, the MAP can also adaptively change its level in the hierarchy according to the service load or other related information. A detailed handoff algorithm is also discussed in this paper.
Adaptive lifting scheme of wavelet transforms for image compression
NASA Astrophysics Data System (ADS)
Wu, Yu; Wang, Guoyin; Nie, Neng
2001-03-01
Aiming at the demand for adaptive wavelet transforms via lifting, a three-stage lifting scheme (predict-update-adapt) is proposed in this paper, extending the common two-stage (predict-update) lifting scheme. The second stage is an updating stage and the third an adaptive predicting stage, so the scheme is an update-then-predict scheme that can detect jumps in the image from the updated data and needs no additional side information. The first stage, an interim updating step, is the key to our scheme: its coefficient can be adjusted to adapt to the data and achieve a better result. In the adaptive predicting stage, we use symmetric prediction filters in smooth areas of the image and asymmetric prediction filters at the edges of jumps to reduce prediction errors. We design these filters directly in the spatial domain. The inherent relationships between the coefficients of the first stage and those of the other stages are found and expressed by equations. The design result is thus a class of filters whose coefficients are no longer invariant. Simulation results of image coding with our scheme are good.
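For orientation, the classical two-stage (predict-update) lifting step that the paper extends can be sketched as follows, using the standard (2,2) linear-prediction filters with periodic boundaries; the third adaptive stage and the data-dependent filters of the paper are not reproduced here.

```python
import numpy as np

def lifting_forward(x):
    """One level of the classical (2,2) predict-update lifting transform
    (periodic boundaries, even-length signal). Returns the smooth (s) and
    detail (d) subbands."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    # Predict: detail = odd sample minus linear prediction from even neighbours.
    d = odd - 0.5 * (even + np.roll(even, -1))
    # Update: smooth = even sample plus correction from neighbouring details.
    s = even + 0.25 * (np.roll(d, 1) + d)
    return s, d

def lifting_inverse(s, d):
    """Exact inverse: undo the update, then undo the predict, then interleave."""
    even = s - 0.25 * (np.roll(d, 1) + d)
    odd = d + 0.5 * (even + np.roll(even, -1))
    x = np.empty(2 * len(s))
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting stage is inverted by simply reversing its sign, perfect reconstruction holds for any choice of prediction filter, which is what makes the adaptive variants of the paper possible.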
Gearbox fault diagnosis using adaptive redundant Lifting Scheme
NASA Astrophysics Data System (ADS)
Hongkai, Jiang; Zhengjia, He; Chendong, Duan; Peng, Chen
2006-11-01
Vibration signals acquired from a gearbox are usually complex, and it is difficult to detect the symptoms of an inherent fault in a gearbox. In this paper, an adaptive redundant lifting scheme for the fault diagnosis of gearboxes is developed. It adopts a data-based optimisation algorithm to lock on to the dominant structure of the signal and clearly reveal the transient components of the vibration signal in the time domain. Both the lifting scheme and the adaptive redundant lifting scheme are applied to analyse the experimental signal from a gearbox with a wear fault and the practical vibration signal from a large air compressor. The results confirm that the adaptive redundant lifting scheme is quite effective in extracting impulse and modulation feature components from a complex background.
Adaptive moving finite volume scheme for flood inundation modeling under dry and complex topography
NASA Astrophysics Data System (ADS)
Zhou, F.; Chen, G.
2012-04-01
To assess and alleviate the risk of flood inundation at the local scale, numerical models with high accuracy, spatial resolution, and efficiency are crucial for reliable forecasts and early warnings of flood inundation at large or meso-scales. Unlike traditional numerical models on fixed meshes, an adaptive moving finite volume scheme on moving meshes is proposed for flood inundation modeling under dry and complex topography; this scheme aims to improve predictive accuracy, spatial resolution, and computational efficiency while satisfying well-balanced and positivity-preserving properties. The crucial feature of our scheme is to move a fixed number of unstructured triangular meshes adaptively to approximate the time-variant patterns of the flow variables, and then to update the flow variables through PDE discretization on the new meshes. Each time step of the simulation consists of three parts, given at time level n: (1) an adaptive mesh movement equation for moving a vertex from xij(n,v) to xij(n,v+1), where v is the iteration step; this equation can be transformed into the Euler-Lagrange equations ∇·(ω∇x) = 0, in which the monitor function ω is determined by the solution and its gradient; (2) geometrically conservative interpolation for remapping the flow variables from Ui(n,v) to Ui(n,v+1); when ||xij(n,v+1) - xij(n,v)|| ≤ 10^-6 or v = 5, we set xij(n,+∞) := xij(n,v+1) and Ui(n,+∞) := Ui(n,v+1); and (3) an HLL-based PDE discretization for updating the flow variables from Ui(n,+∞) to Ui(n+1,0); the treatments of the bed slope source terms and the wet-dry interface are based on the second-order reconstruction of Audusse et al. (2004) and Audusse and Bristeau (2005). Two analytical and two experimental test cases were performed to verify the advantages of the proposed scheme over non-adaptive methods. The results revealed two attractive features: (i) this scheme could achieve high-accuracy and high
Block-based adaptive lifting schemes for multiband image compression
NASA Astrophysics Data System (ADS)
Masmoudi, Hela; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe
2004-02-01
In this paper, we are interested in designing lifting schemes adapted to the statistics of the wavelet coefficients of multiband images for compression applications. More precisely, nonseparable vector lifting schemes are used in order to capture the spatial and spectral redundancies simultaneously. The underlying operators are then computed in order to minimize the entropy of the resulting multiresolution representation. To this end, we have developed a new iterative block-based classification algorithm. Simulation tests carried out on remotely sensed multispectral images indicate that a substantial gain in terms of bit rate is achieved by the proposed adaptive coding method w.r.t. the non-adaptive one.
Adaptive schemes for incomplete quantum process tomography
Teo, Yong Siah; Englert, Berthold-Georg; Rehacek, Jaroslav; Hradil, Zdenek
2011-12-15
We propose an iterative algorithm for incomplete quantum process tomography with the help of quantum state estimation. The algorithm, which is based on the combined principles of maximum likelihood and maximum entropy, yields a unique estimator for an unknown quantum process when one has less than a complete set of linearly independent measurement data to specify the quantum process uniquely. We apply this iterative algorithm adaptively in various situations and so optimize the amount of resources required to estimate a quantum process with incomplete data.
Adaptable Iterative and Recursive Kalman Filter Schemes
NASA Technical Reports Server (NTRS)
Zanetti, Renato
2014-01-01
Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The iterated Kalman filter (IKF) and the recursive update filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where N, the number of recursions, is a tuning parameter. This paper introduces an adaptable RUF algorithm to calculate N on the go; a similar technique can be used for the IKF as well.
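A minimal sketch of the recursive-update idea (not Zanetti's exact formulation, and without the adaptive on-line choice of N): the same measurement is applied N times with the noise covariance inflated to N·R, relinearizing the measurement model on each pass. For a linear model this reproduces the standard Kalman update exactly; for a nonlinear model the repeated relinearization is what reduces the EKF's linearization error.

```python
import numpy as np

def recursive_update(x, P, z, h, H_jac, R, N=10):
    """Recursive measurement update: apply measurement z in N small steps,
    each with inflated covariance N*R, relinearizing h at the running
    estimate. x is the state mean, P its covariance, h the measurement
    function and H_jac its Jacobian."""
    for _ in range(N):
        H = H_jac(x)
        S = H @ P @ H.T + N * R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # partial Kalman gain
        x = x + K @ (z - h(x))
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Scalar linear sanity check: the recursion must match the single EKF update,
# which for P = R = 1 gives x = 0.5 and P = 0.5.
x_hat, P_hat = recursive_update(np.array([0.0]), np.eye(1), np.array([1.0]),
                                lambda x: x, lambda x: np.eye(1), np.eye(1))
```

The equivalence in the linear case follows from the information form: N updates with covariance N·R add exactly H'R⁻¹H to the information matrix, the same as one update with R.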
NASA Astrophysics Data System (ADS)
Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken
2015-04-01
We investigate the operational utility of fine time step hydro-climatic information using a large catchment data set. The originality of this data set lies in the availability of precipitation data from the 6-minute rain gauges of Météo-France, and in the size of the catchment set (217 French catchments in total). The rainfall-runoff model used (GR4) has been adapted to hourly and sub-hourly time steps (up to 6-minute) from the daily time step version (Perrin et al., 2003). The model is applied at different time steps ranging from 6-minute to 1 day (6-, 12-, 30-minute, 1-, 3-, 6-, 12-hour and 1 day) and the evolution of model performance for each catchment is evaluated at the daily time step by aggregation of model outputs. Three classes of behavior are found according to the trend of model performance as the time step becomes finer: (i) catchments presenting an improvement of model performance; (ii) catchments with a model performance insensitive to the time step; (iii) catchments for which the performance even deteriorates as the time step becomes finer. The reasons behind these different trends are investigated from a hydrological point of view, by relating the model sensitivity to data at finer time step to catchment descriptors. References: Perrin, C., C. Michel and V. Andréassian (2003), "Improvement of a parsimonious model for streamflow simulation", Journal of Hydrology, 279(1-4): 275-289.
An adaptive control scheme for a flexible manipulator
NASA Technical Reports Server (NTRS)
Yang, T. C.; Yang, J. C. S.; Kudva, P.
1987-01-01
The problem of controlling a single link flexible manipulator is considered. A self-tuning adaptive control scheme is proposed which consists of a least squares on-line parameter identification of an equivalent linear model followed by a tuning of the gains of a pole placement controller using the parameter estimates. Since the initial parameter values for this model are assumed unknown, the use of arbitrarily chosen initial parameter estimates in the adaptive controller would result in undesirable transient effects. Hence, the initial stage control is carried out with a PID controller. Once the identified parameters have converged, control is transferred to the adaptive controller. Naturally, the relevant issues in this scheme are tests for parameter convergence and minimization of overshoots during control switch-over. To demonstrate the effectiveness of the proposed scheme, simulation results are presented with an analytical nonlinear dynamic model of a single link flexible manipulator.
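The identification stage of such a self-tuning scheme is typically recursive least squares (RLS). The sketch below shows a generic RLS update identifying a first-order ARX model from noiseless data; the flexible-manipulator model, the pole-placement gain tuning, and the PID-to-adaptive switch-over logic of the paper are not reproduced, and all numbers are illustrative assumptions.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares step: theta is the parameter estimate,
    P the inverse-correlation matrix, phi the regressor vector, y the
    measured output and lam the forgetting factor."""
    K = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + K * (y - phi @ theta)    # correct by prediction error
    P = (P - np.outer(K, phi @ P)) / lam     # update inverse correlation
    return theta, P

# Identify y[k] = a*y[k-1] + b*u[k-1]; a and b are known only to the simulator.
a_true, b_true = 0.8, 0.5
rng = np.random.default_rng(1)
u = rng.normal(size=300)
theta, P = np.zeros(2), 1000.0 * np.eye(2)
y_prev = 0.0
for k in range(1, 300):
    y = a_true * y_prev + b_true * u[k - 1]
    phi = np.array([y_prev, u[k - 1]])
    theta, P = rls_update(theta, P, phi, y)
    y_prev = y
```

With persistent excitation and no noise, theta converges to the true parameters, which is the convergence test the paper uses to trigger the hand-off from the PID controller to the adaptive controller.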
On the dynamics of some grid adaption schemes
NASA Technical Reports Server (NTRS)
Sweby, Peter K.; Yee, Helen C.
1994-01-01
The dynamics of a one-parameter family of mesh equidistribution schemes coupled with finite difference discretisations of linear and nonlinear convection-diffusion model equations is studied numerically. It is shown that, when time marched to steady state, the grid adaption not only influences the stability and convergence rate of the overall scheme, but can also introduce spurious dynamics to the numerical solution procedure.
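The equidistribution principle underlying such grid adaption can be sketched statically in one dimension: nodes are placed so that each cell carries an equal share of the integral of a monitor function. This is only the principle behind the one-parameter family studied in the paper, not that family itself, and the arclength monitor and test function are illustrative assumptions.

```python
import numpy as np

def equidistribute(u_func, n, monitor=lambda du: np.sqrt(1.0 + du**2)):
    """Place n+1 nodes on [0, 1] so each cell carries an equal share of the
    integral of an arclength-type monitor of u."""
    # Evaluate the monitor on a fine background grid.
    xf = np.linspace(0.0, 1.0, 2001)
    du = np.gradient(u_func(xf), xf)
    M = monitor(du)
    # Cumulative integral of M (trapezoidal), normalized to [0, 1].
    c = np.concatenate([[0.0],
                        np.cumsum(0.5 * (M[1:] + M[:-1]) * np.diff(xf))])
    c /= c[-1]
    # Invert: node x_i satisfies c(x_i) = i/n.
    return np.interp(np.linspace(0.0, 1.0, n + 1), c, xf)

# Adapt a 41-node mesh to a steep interior layer at x = 0.5.
mesh = equidistribute(lambda x: np.tanh(25.0 * (x - 0.5)), 40)
```

The resulting mesh clusters nodes inside the layer and coarsens elsewhere; the dynamics studied in the paper arise when such a redistribution is coupled, step by step, to the evolving discrete solution.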
An efficient class of WENO schemes with adaptive order
NASA Astrophysics Data System (ADS)
Balsara, Dinshaw S.; Garain, Sudip; Shu, Chi-Wang
2016-12-01
Finite difference WENO schemes have established themselves as very worthy performers for entire classes of applications that involve hyperbolic conservation laws. In this paper we report on two major advances that make finite difference WENO schemes more efficient. The first advance consists of realizing that WENO schemes require us to carry out stencil operations very efficiently. In this paper we show that the reconstructed polynomials for any one-dimensional stencil can be expressed most efficiently and economically in Legendre polynomials. By using a Legendre basis, we show that the reconstruction polynomials and their corresponding smoothness indicators can be written very compactly. The smoothness indicators are written as a sum of perfect squares. Since this is a computationally expensive step, the efficiency of finite difference WENO schemes is enhanced by the innovation reported here. The second advance consists of realizing that one can make a non-linear hybridization between a large, centered, very high accuracy stencil and a lower order WENO scheme that is nevertheless very stable and capable of capturing physically meaningful extrema. This yields a class of adaptive order WENO schemes, which we call WENO-AO (for adaptive order). Thus we arrive at a WENO-AO(5,3) scheme that is at best fifth order accurate by virtue of its centered stencil with five zones and at worst third order accurate by virtue of being non-linearly hybridized with an r = 3 CWENO scheme. The process can be extended to arrive at a WENO-AO(7,3) scheme that is at best seventh order accurate by virtue of its centered stencil with seven zones and at worst third order accurate. We then recursively combine the above two schemes to arrive at a WENO-AO(7,5,3) scheme which can achieve seventh order accuracy when that is possible; gracefully drop down to fifth order accuracy when that is the best one can do; and also operate stably with an r = 3 CWENO scheme when that is the only thing
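The smoothness indicators mentioned above can be illustrated with the classical Jiang-Shu formulas for the three 3-point substencils of fifth-order WENO; the Legendre-basis formulation of the paper reorganizes, but does not change, this content.

```python
import numpy as np

def weno5_smoothness(f):
    """Jiang-Shu smoothness indicators for the three 3-point substencils at
    the middle cell of a 5-point window f = [f_{i-2}, ..., f_{i+2}]. Each
    indicator is a weighted sum of squares of undivided differences."""
    fm2, fm1, f0, fp1, fp2 = f
    b0 = 13.0/12.0 * (fm2 - 2*fm1 + f0)**2 + 0.25 * (fm2 - 4*fm1 + 3*f0)**2
    b1 = 13.0/12.0 * (fm1 - 2*f0 + fp1)**2 + 0.25 * (fm1 - fp1)**2
    b2 = 13.0/12.0 * (f0 - 2*fp1 + fp2)**2 + 0.25 * (3*f0 - 4*fp1 + fp2)**2
    return np.array([b0, b1, b2])
```

On smooth (here, linear) data all three indicators agree, so the nonlinear weights revert to the high-order linear combination; across a jump, the substencil that avoids the discontinuity has a much smaller indicator and dominates.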
Adaptive Tracking Control for Robots With an Interneural Computing Scheme.
Tsai, Feng-Sheng; Hsu, Sheng-Yi; Shih, Mau-Hsiang
2017-01-24
Adaptive tracking control of mobile robots requires the ability to follow a trajectory generated by a moving target. The conventional analysis of adaptive tracking uses energy minimization to study the convergence and robustness of the tracking error when the mobile robot follows a desired trajectory. However, in the case that the moving target generates trajectories with uncertainties, a common Lyapunov-like function for energy minimization may be extremely difficult to determine. Here, to solve the adaptive tracking problem with uncertainties, we wish to implement an interneural computing scheme in the design of a mobile robot for behavior-based navigation. The behavior-based navigation adopts an adaptive plan of behavior patterns learning from the uncertainties of the environment. The characteristic feature of the interneural computing scheme is the use of neural path pruning with rewards and punishment interacting with the environment. On this basis, the mobile robot can be exploited to change its coupling weights in paths of neural connections systematically, which can then inhibit or enhance the effect of flow elimination in the dynamics of the evolutionary neural network. Such dynamical flow translation ultimately leads to robust sensory-to-motor transformations adapting to the uncertainties of the environment. A simulation result shows that the mobile robot with the interneural computing scheme can perform fault-tolerant behavior of tracking by maintaining suitable behavior patterns at high frequency levels.
A generic efficient adaptive grid scheme for rocket propulsion modeling
NASA Technical Reports Server (NTRS)
Mo, J. D.; Chow, Alan S.
1993-01-01
The objective of this research is to develop an efficient, time-accurate numerical algorithm to discretize the Navier-Stokes equations for the predictions of internal one-, two-dimensional and axisymmetric flows. A generic, efficient, elliptic adaptive grid generator is implicitly coupled with the Lower-Upper factorization scheme in the development of ALUNS computer code. The calculations of one-dimensional shock tube wave propagation and two-dimensional shock wave capture, wave-wave interactions, shock wave-boundary interactions show that the developed scheme is stable, accurate and extremely robust. The adaptive grid generator produced a very favorable grid network by a grid speed technique. This generic adaptive grid generator is also applied in the PARC and FDNS codes and the computational results for solid rocket nozzle flowfield and crystal growth modeling by those codes will be presented in the conference, too. This research work is being supported by NASA/MSFC.
Finn, John M.
2015-03-01
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure-preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each of which is Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.
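The implicit midpoint scheme discussed above is easy to sketch: x_{n+1} = x_n + h·f((x_n + x_{n+1})/2), solved here by fixed-point iteration. The rotation field below is an illustrative stand-in (the planar restriction of a uniform-twist field), not a configuration from the paper; for this linear field IM preserves the field-line radius exactly, a quadratic invariant.

```python
import numpy as np

def implicit_midpoint_step(f, x, h, iters=50, tol=1e-12):
    """One implicit-midpoint step x_{n+1} = x_n + h*f((x_n + x_{n+1})/2),
    solved by fixed-point iteration from an explicit Euler predictor.
    Implicit midpoint is self-adjoint (hence second order) and preserves
    quadratic invariants."""
    y = x + h * f(x)
    for _ in range(iters):
        y_new = x + h * f(0.5 * (x + y))
        done = np.linalg.norm(y_new - y) < tol
        y = y_new
        if done:
            break
    return y

# Pure rotation: field lines are circles of constant radius.
f = lambda x: np.array([-x[1], x[0]])
x = np.array([1.0, 0.0])
for _ in range(1000):
    x = implicit_midpoint_step(f, x, 0.05)
```

The fixed-point iteration converges because h times the Lipschitz constant of f, divided by 2, is well below one here; for stiffer fields a Newton solve would replace it.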
An Adaptive Ship Detection Scheme for Spaceborne SAR Imagery.
Leng, Xiangguang; Ji, Kefeng; Zhou, Shilin; Xing, Xiangwei; Zou, Huanxin
2016-08-23
With the rapid development of spaceborne synthetic aperture radar (SAR) and the increasing need of ship detection, research on adaptive ship detection in spaceborne SAR imagery is of great importance. Focusing on practical problems of ship detection, this paper presents a highly adaptive ship detection scheme for spaceborne SAR imagery. It is able to process a wide range of sensors, imaging modes and resolutions. Two main stages are identified in this paper, namely: ship candidate detection and ship discrimination. Firstly, this paper proposes an adaptive land masking method using ship size and pixel size. Secondly, taking into account the imaging mode, incidence angle, and polarization channel of SAR imagery, it implements adaptive ship candidate detection in spaceborne SAR imagery by applying different strategies to different resolution SAR images. Finally, aiming at different types of typical false alarms, this paper proposes a comprehensive ship discrimination method in spaceborne SAR imagery based on confidence level and complexity analysis. Experimental results based on RADARSAT-1, RADARSAT-2, TerraSAR-X, RS-1, and RS-3 images demonstrate that the adaptive scheme proposed in this paper is able to detect ship targets in a fast, efficient and robust way.
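The adaptive-threshold ingredient common to such detectors can be illustrated with a one-dimensional cell-averaging CFAR sketch: a cell is declared a target when it exceeds a multiple of the mean of surrounding training cells, so the threshold adapts to the local clutter level. This is only a generic baseline; the paper's full scheme (land masking, resolution- and polarization-dependent strategies, and discrimination) is considerably richer, and the clutter model below is an illustrative assumption.

```python
import numpy as np

def ca_cfar(x, guard=2, train=8, scale=10.0):
    """Cell-averaging CFAR on a 1-D intensity profile: for each cell, average
    `train` cells on each side (skipping `guard` cells next to the cell under
    test) and flag the cell if it exceeds scale times that local mean."""
    n = len(x)
    hits = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        left = x[i - guard - train:i - guard]
        right = x[i + guard + 1:i + guard + 1 + train]
        noise = np.mean(np.r_[left, right])
        hits[i] = x[i] > scale * noise
    return hits

# Synthetic range profile: exponential clutter with one strong target.
rng = np.random.default_rng(0)
profile = rng.exponential(1.0, size=200)
profile[50] = 50.0
hits = ca_cfar(profile)
```

The guard cells keep energy from an extended target out of its own noise estimate; the scale factor sets the (constant) false-alarm rate against the assumed clutter statistics.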
Adaptive PCA based fault diagnosis scheme in imperial smelting process.
Hu, Zhikun; Chen, Zhiwen; Gui, Weihua; Jiang, Bin
2014-09-01
In this paper, an adaptive fault detection scheme based on recursive principal component analysis (PCA) is proposed to deal with the problem of false alarms due to normal process changes in a real process. A further contribution is a fault isolation approach based on the generalized likelihood ratio (GLR) test and singular value decomposition (SVD), with which offset and scaling faults can be easily isolated, with an explicit offset fault direction and scaling fault classification. The identification of offset and scaling faults is also addressed. The complete PCA-based fault diagnosis procedure is proposed. The scheme is applied to the Imperial Smelting Process, and the results show that the proposed strategies can mitigate false alarms and isolate faults efficiently.
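The core PCA monitoring step can be sketched as follows: fit a principal subspace on normal operating data and flag samples whose squared prediction error (SPE, or Q statistic) is large. The recursive model update and the GLR-based isolation of the paper are not reproduced, and the three-variable data below are an illustrative assumption.

```python
import numpy as np

def pca_model(X, n_comp):
    """Fit a static PCA monitoring model on normal data (rows = samples).
    Returns the mean, std and retained principal loadings."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sigma
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return mu, sigma, Vt[:n_comp].T          # loadings: (n_vars, n_comp)

def spe(x, mu, sigma, P):
    """Squared prediction error of one sample: the part of the scaled
    sample not explained by the retained principal subspace."""
    z = (x - mu) / sigma
    resid = z - P @ (P.T @ z)
    return float(resid @ resid)

# Normal data: two strongly correlated variables plus one independent one.
rng = np.random.default_rng(0)
t = rng.normal(size=500)
X = np.column_stack([t, t + 0.01 * rng.normal(size=500),
                     rng.normal(size=500)])
mu, sigma, P = pca_model(X, n_comp=2)
q_normal = spe(X[0], mu, sigma, P)
# A fault that breaks the correlation structure inflates the SPE.
q_fault = spe(mu + sigma * np.array([0.0, 4.0, 0.0]), mu, sigma, P)
```

A sample that merely moves along the learned correlation stays in the principal subspace and keeps a small SPE; a fault that violates the correlation leaves a large residual, which is what the detection threshold tests.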
A Stable Adaptive Numerical Scheme for Hyperbolic Conservation Laws.
1983-05-01
Bradley J. Lucier, Mathematics Research Center, University of Wisconsin-Madison, 610 Walnut Street, Madison, Wisconsin 53706. Technical Summary Report #2517, May 1983 (received April 5, 1983). Approved for public release; distribution unlimited. Sponsored by the U.S. Army Research Office and the National Science Foundation. Abstract: A new
Multiple-time-stepping generalized hybrid Monte Carlo methods
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple time stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by the generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
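The deterministic core of such MTS integrators is the impulse (r-RESPA style) splitting: the slow force kicks the momentum at half the outer step, while the fast force is integrated with inner velocity-Verlet substeps. The GHMC momentum refreshment, mollification, and shadow-Hamiltonian filtering of the paper are not sketched here, and the split harmonic forces below are illustrative assumptions.

```python
import numpy as np

def mts_step(q, p, dt, n_inner, f_fast, f_slow, m=1.0):
    """One outer impulse-MTS step: slow half-kick, n_inner velocity-Verlet
    substeps under the fast force, slow half-kick."""
    h = dt / n_inner
    p = p + 0.5 * dt * f_slow(q)          # slow half-kick
    for _ in range(n_inner):              # fast velocity-Verlet substeps
        p = p + 0.5 * h * f_fast(q)
        q = q + h * p / m
        p = p + 0.5 * h * f_fast(q)
    p = p + 0.5 * dt * f_slow(q)          # slow half-kick
    return q, p

# Split harmonic oscillator: stiff fast spring plus soft slow spring.
k_fast, k_slow = 100.0, 1.0
f_fast = lambda q: -k_fast * q
f_slow = lambda q: -k_slow * q
q, p = 1.0, 0.0
e0 = 0.5 * p**2 + 0.5 * (k_fast + k_slow) * q**2
for _ in range(1000):
    q, p = mts_step(q, p, 0.05, 10, f_fast, f_slow)
e1 = 0.5 * p**2 + 0.5 * (k_fast + k_slow) * q**2
```

The slow force is evaluated once per outer step instead of once per substep, which is where the savings come from; the outer step must still stay clear of the well-known resonance at half the fast period.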
An adaptive nonlocal means scheme for medical image denoising
NASA Astrophysics Data System (ADS)
Thaipanich, Tanaphol; Kuo, C.-C. Jay
2010-03-01
Medical images often consist of low-contrast objects corrupted by random noise arising in the image acquisition process. Thus, image denoising is one of the fundamental tasks required by medical imaging analysis. In this work, we investigate an adaptive denoising scheme based on the nonlocal (NL)-means algorithm for medical imaging applications. In contrast with the traditional NL-means algorithm, the proposed adaptive NL-means (ANL-means) denoising scheme has three unique features. First, it employs the singular value decomposition (SVD) method and the K-means clustering (K-means) technique for robust classification of blocks in noisy images. Second, the local window is adaptively adjusted to match the local property of a block. Finally, a rotated block matching algorithm is adopted for better similarity matching. Experimental results from both additive white Gaussian noise (AWGN) and Rician noise are given to demonstrate the superior performance of the proposed ANL-means denoising technique over various image denoising benchmarks in terms of both PSNR and perceptual quality.
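The baseline that the paper adapts can be sketched directly: plain nonlocal means replaces each pixel by a weighted average of pixels in a search window, with weights given by Gaussian similarity of the surrounding patches. The SVD/K-means block classification, adaptive windows, and rotated matching of the paper are not included, and the parameter values are illustrative assumptions.

```python
import numpy as np

def nl_means(img, patch=1, search=5, h=0.4):
    """Plain nonlocal-means denoising of a 2-D image: for each pixel, average
    the pixels of a (2*search+1)^2 window, weighting each by the similarity
    of its (2*patch+1)^2 patch to the reference patch."""
    pad = patch + search
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ci, cj = i + pad, j + pad
            ref = padded[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            w_sum, acc = 0.0, 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - patch:ni + patch + 1,
                                  nj - patch:nj + patch + 1]
                    w = np.exp(-np.sum((ref - cand) ** 2) / (h * h))
                    w_sum += w
                    acc += w * padded[ni, nj]
            out[i, j] = acc / w_sum
    return out

# Toy test image: two flat regions plus Gaussian noise.
rng = np.random.default_rng(0)
truth = np.zeros((16, 16))
truth[:, 8:] = 1.0
noisy = truth + 0.1 * rng.normal(size=truth.shape)
den = nl_means(noisy)
```

Patches from the same flat region receive large weights and average the noise away, while patches straddling the edge are dissimilar and contribute little, which is why NL-means smooths without blurring edges as much as a plain local average.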
Towards Adaptive High-Resolution Images Retrieval Schemes
NASA Astrophysics Data System (ADS)
Kourgli, A.; Sebai, H.; Bouteldja, S.; Oukil, Y.
2016-10-01
Nowadays, content-based image-retrieval techniques constitute powerful tools for archiving and mining large remote sensing image databases. High spatial resolution images are complex and differ widely in their content, even within the same category. All images are more or less textured and structured. During the last decade, different approaches for the retrieval of this type of image have been proposed, differing mainly in the type of features extracted. As these features are supposed to represent the query image efficiently, they should be adapted to all kinds of images contained in the database. However, if the image to recognize is somewhat or very structured, a shape feature will be correspondingly effective, while if the image is composed of a single texture, a parameter reflecting its texture will prove more efficient. This motivates the use of adaptive schemes, and we propose to investigate this idea by adapting the retrieval scheme to the nature of the image. This is achieved by performing some preliminary analysis so that the indexing stage becomes supervised. First results show that, in this way, simple methods can match the performance of complex methods such as those based on the creation of bags of visual words using SIFT (Scale Invariant Feature Transform) descriptors and those based on multi-scale feature extraction using wavelets and steerable pyramids.
Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature
NASA Technical Reports Server (NTRS)
Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)
2000-01-01
This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever sufficient order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme supplants the usual exact single-state mean-value linearization of flux divergence, typically used for the Euler equations of gasdynamics, by an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier so that correct weak solutions are still obtained in the limit of mesh refinement. Consequently, we then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities together with multi-level mesh refinement are provided to verify the analysis.
Optimal time step for incompressible SPH
NASA Astrophysics Data System (ADS)
Violeau, Damien; Leroy, Agnès
2015-05-01
A classical incompressible algorithm for Smoothed Particle Hydrodynamics (ISPH) is analyzed in terms of the critical time step for numerical stability. For this purpose, a theoretical linear stability analysis is conducted for unbounded homogeneous flows, leading to an analytical formula for the maximum CFL (Courant-Friedrichs-Lewy) number as a function of the Fourier number. This gives the maximum time step as a function of the fluid viscosity, the flow velocity scale and the SPH discretization size (kernel standard deviation). Importantly, the maximum CFL number at large Reynolds number appears to be half that of the traditional Weakly Compressible (WCSPH) approach. As a consequence, the optimal time step for ISPH is only five times larger than with WCSPH. The theory agrees very well with numerical data for two usual kernels in a 2-D periodic flow. On the other hand, numerical experiments in a plane Poiseuille flow show that the theory overestimates the maximum allowed time step for small Reynolds numbers.
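The two stability limits that such an analysis combines, an advective (CFL) limit scaling with h/U and a viscous (Fourier) limit scaling with h²/ν, can be sketched as follows. The safety coefficients are assumed illustrative values, not the paper's derived constants:

```python
def isph_max_dt(h, u_max, nu, c_cfl=0.5, c_fourier=0.125):
    """Estimate a stable ISPH time step from the two classical limits.

    h         : SPH discretisation length (kernel standard deviation)
    u_max     : flow velocity scale
    nu        : kinematic viscosity
    c_cfl, c_fourier : safety coefficients (assumed, not the paper's values)
    """
    dt_adv = c_cfl * h / u_max        # advective (CFL) limit ~ h / U
    dt_visc = c_fourier * h**2 / nu   # viscous (Fourier) limit ~ h^2 / nu
    return min(dt_adv, dt_visc)
```

At large Reynolds number the advective limit dominates; at small Reynolds number the viscous limit takes over, which is where the paper finds the theory least sharp for bounded flows.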
Adaptive spatially dependent weighting scheme for tomosynthesis reconstruction
NASA Astrophysics Data System (ADS)
Levakhina, Yulia; Duschka, Robert; Vogt, Florian; Barkhausen, Jörg; Buzug, Thorsten M.
2012-03-01
Digital Tomosynthesis (DT) is an x-ray limited-angle imaging technique. Accurate image reconstruction in tomosynthesis is a challenging task due to the violation of the tomographic sufficiency conditions. A classical "shift-and-add" algorithm (or simple backprojection) suffers from blurring artifacts produced by structures located above and below the plane of interest. The artifact problem becomes even more prominent in the presence of materials and tissues with high x-ray attenuation, such as bones, microcalcifications or metal. The focus of the current work is on the reduction of ghosting artifacts produced by bones in musculoskeletal tomosynthesis. A novel dissimilarity concept and a modified backprojection with an adaptive spatially dependent weighting scheme (ωBP) are proposed. Simulated data of a software phantom, as well as raw data of a structured hardware phantom and a human hand acquired with a Siemens Mammomat Inspiration tomosynthesis system, were reconstructed using the conventional backprojection algorithm and the new ωBP algorithm. The comparison of the results to the non-weighted case demonstrates the potential of the proposed weighted backprojection to reduce blurring artifacts in musculoskeletal DT. The proposed weighting scheme is not limited to the tomosynthesis limited-angle geometry; it can also be adapted for Computed Tomography (CT) and included in iterative reconstruction algorithms (e.g., SART).
An Adaptive Motion Estimation Scheme for Video Coding
Gao, Yuan; Jia, Kebin
2014-01-01
The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased computational complexity of ME. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistical results of motion vector (MV) distribution information. Then, an MV distribution prediction method is designed, covering prediction of both the size and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% while the rate-distortion performance is not compromised. PMID:24672313
An adaptive identification and control scheme for large space structures
NASA Technical Reports Server (NTRS)
Carroll, J. V.
1988-01-01
A unified identification and control scheme capable of achieving space platform performance objectives under nominal or failure conditions is described. Preliminary results are also presented, showing that the methodology offers much promise for effective robust control of large space structures. The control method is a multivariable, adaptive, output predictive controller called Model Predictive Control (MPC). MPC uses a state space model and input reference trajectories of set or tracking points to adaptively generate optimum commands. For a fixed model, MPC processes commands with great efficiency, and is also highly robust. A key feature of MPC is its ability to control either nonminimum phase or open loop unstable systems. As an output controller, MPC does not explicitly require full state feedback, as do most multivariable (e.g., Linear Quadratic) methods. Its features are very useful in LSS operations, as they allow non-collocated actuators and sensors. The identification scheme is based on canonical variate analysis (CVA) of input and output data. The CVA technique is particularly suited for the measurement and identification of structural dynamic processes - that is, unsteady transient or dynamically interacting processes such as between aerodynamics and structural deformation - from short, noisy data. CVA is structured so that the identification can be done in real or near real time, using computationally stable algorithms. Modeling LSS dynamics in 1-g laboratories has always been a major impediment not only to understanding their behavior in orbit, but also to controlling it. In cases where the theoretical model is not confirmed, current methods provide few clues concerning additional dynamical relationships that are not included in the theoretical models. CVA needs no a priori model data or structure; all statistically significant dynamical states are determined using natural, entropy-based methods. Heretofore, a major limitation in applying adaptive
Optimization Integrator for Large Time Steps.
Gast, Theodore F; Schroeder, Craig; Stomakhin, Alexey; Jiang, Chenfanfu; Teran, Joseph M
2015-10-01
Practical time steps in today's state-of-the-art simulators typically rely on Newton's method to solve large systems of nonlinear equations. In practice, this works well for small time steps but is unreliable at large time steps at or near the frame rate, particularly for difficult or stiff simulations. We show that recasting backward Euler as a minimization problem allows Newton's method to be stabilized by standard optimization techniques with some novel improvements of our own. The resulting solver is capable of solving even the toughest simulations at the [Formula: see text] frame rate and beyond. We show how simple collisions can be incorporated directly into the solver through constrained minimization without sacrificing efficiency. We also present novel penalty collision formulations for self collisions and collisions against scripted bodies designed for the unique demands of this solver. Finally, we show that these techniques improve the behavior of Material Point Method (MPM) simulations by recasting it as an optimization problem.
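The recasting of backward Euler as a minimization problem can be illustrated for a particle governed by M x'' = -∇U(x): the new position minimizes a quadratic inertia term plus the potential. This sketch uses plain gradient descent rather than the paper's stabilized Newton solver, and all names and step sizes are illustrative assumptions:

```python
import numpy as np

def backward_euler_step(x, v, h, grad_U, M=1.0, iters=200, lr=None):
    """One backward Euler step for M x'' = -grad_U(x), as a minimization.

    Minimizes E(y) = M/(2 h^2) * |y - x - h v|^2 + U(y); the minimizer
    satisfies the backward Euler equations. Gradient descent is used
    here only to keep the sketch self-contained."""
    y = x + h * v                       # predicted position (initial guess)
    if lr is None:
        lr = 0.5 * h**2 / M             # conservative step for the inertia term
    for _ in range(iters):
        gradE = M / h**2 * (y - x - h * v) + grad_U(y)
        y = y - lr * gradE
    v_new = (y - x) / h                 # backward Euler velocity update
    return y, v_new
```

For a quadratic potential the minimizer can be checked in closed form, which is what makes this formulation attractive: stabilizing a minimization is much better understood than stabilizing a raw Newton solve of the nonlinear system.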
Accurate and stable time stepping in ice sheet modeling
NASA Astrophysics Data System (ADS)
Cheng, Gong; Lötstedt, Per; von Sydow, Lina
2017-01-01
In this paper we introduce adaptive time step control for simulation of the evolution of ice sheets. The discretization error in the approximations is estimated using "Milne's device", by comparing the results from two different methods in a predictor-corrector pair. With this approach, the expensive part of the procedure, the solution of the velocity and pressure equations, is performed only once per time step, and an estimate of the local error is easily obtained. The stability of the numerical solution is maintained and the accuracy is controlled by keeping the local error below a given threshold using PI-control. Depending on the threshold, the time step Δt is bounded by stability requirements or accuracy requirements. Our method takes a shorter Δt than an implicit method, but with less work in each time step and a simpler solver. The method is analyzed theoretically with respect to stability and applied to the simulation of a 2D ice slab and a 3D circular ice sheet. The stability bounds in the experiments agree well with, and are explained by, the theoretical results.
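The ingredients above (predictor-corrector pair, Milne-type error estimate, PI step controller) can be sketched for a generic ODE. The Euler/trapezoidal pair and the controller gains kp, ki are common textbook choices, not the paper's ice-sheet-specific ones:

```python
import numpy as np

def adaptive_pc(f, t0, y0, t_end, dt0=0.1, tol=1e-4, kp=0.075, ki=0.175):
    """Adaptive-step forward-Euler predictor / trapezoidal corrector.

    The local error is estimated, in the spirit of Milne's device, from
    the predictor-corrector difference; dt is adjusted by a PI controller."""
    t, y, dt = t0, np.asarray(y0, float), dt0
    err_prev = tol
    while t < t_end:
        dt = min(dt, t_end - t)
        fy = f(t, y)
        y_pred = y + dt * fy                               # explicit Euler predictor
        y_corr = y + 0.5 * dt * (fy + f(t + dt, y_pred))   # trapezoidal corrector
        err = np.linalg.norm(y_corr - y_pred) / 3.0 + 1e-16
        if err <= tol:                                     # accept the step
            t, y = t + dt, y_corr
            dt *= min(5.0, (tol / err) ** ki * (err_prev / err) ** kp)
            err_prev = err
        else:                                              # reject and retry
            dt *= 0.5
    return t, y
```

As in the paper, the expensive evaluation (here `f`; there the velocity-pressure solve) is reused between predictor and corrector, so the error estimate comes almost for free.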
Attitude determination using an adaptive multiple model filtering Scheme
NASA Technical Reports Server (NTRS)
Lam, Quang; Ray, Surendra N.
1995-01-01
Attitude determination has been a permanent topic of active research and perhaps a lasting interest for spacecraft system designers. Its role is to provide a reference for controls such as pointing the directional antennas or solar panels, stabilizing the spacecraft or maneuvering the spacecraft to a new orbit. The Least Squares Estimation (LSE) technique was utilized to provide attitude determination for Nimbus 6 and Nimbus G. Despite its poor performance (in terms of estimation accuracy), LSE was considered an effective and practical approach to meet the urgent needs and requirements of the 1970s. One reason for the poor performance of the LSE scheme is the lack of dynamic filtering or 'compensation'. In other words, the scheme is based entirely on the measurements, and no attempt was made to model the dynamic equations of motion of the spacecraft. We propose an adaptive filtering approach which employs a bank of Kalman filters to perform robust attitude estimation. The proposed approach, whose architecture is depicted, is essentially based on the latest results on the interacting multiple model design framework for handling unknown system noise characteristics or statistics. The concept fundamentally employs a bank of Kalman filters or submodels; instead of using fixed values for the system noise statistics of each submodel (per operating condition), as the traditional multiple model approach does, we use an on-line dynamic system noise identifier to 'identify' the system noise level (statistics) and update the filter noise statistics using 'live' information from the sensor model. The advanced noise identifier, whose architecture is also shown, is implemented using an advanced system identifier. To ensure robust performance, the proposed advanced system identifier is further reinforced by a learning system, implemented (in the outer loop) using neural networks, to identify other unknown
NASA Astrophysics Data System (ADS)
Finn, John M.
2015-03-01
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a "special divergence-free" (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012)], appears to work very well.
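The implicit midpoint (IM) rule discussed above can be sketched for field-line integration dx/ds = b(x). The fixed-point solver, iteration count and step size below are illustrative choices, not the paper's; for a linear rotation field IM reduces to a Cayley transform and preserves the field-line invariant exactly:

```python
import numpy as np

def implicit_midpoint_step(b, x, h, iters=50):
    """One implicit-midpoint step x_new = x + h*b((x + x_new)/2)
    for a field-line flow dx/ds = b(x), solved by fixed-point iteration
    (contractive for sufficiently small h)."""
    x_new = x + h * b(x)                      # explicit predictor
    for _ in range(iters):
        x_new = x + h * b(0.5 * (x + x_new))  # midpoint correction
    return x_new
```

For stiff or strongly varying fields a Newton solve would replace the fixed-point loop, but the structural properties (self-adjointness, reversibility for reversible b) come from the midpoint rule itself, not from the solver.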
An adaptive nonlinear solution scheme for reservoir simulation
Lett, G.S.
1996-12-31
Numerical reservoir simulation involves solving large, nonlinear systems of PDEs with strongly discontinuous coefficients. Because of the large demands on computer memory and CPU, most users must perform simulations on very coarse grids. The average properties of the fluids and rocks must be estimated on these grids. These coarse grid "effective" properties are costly to determine, and risky to use, since their optimal values depend on the fluid flow being simulated. Thus, they must be found by trial-and-error techniques, and the coarser the grid, the poorer the results. This paper describes a numerical reservoir simulator which accepts fine scale properties and automatically generates multiple levels of coarse grid rock and fluid properties. The fine grid properties and the coarse grid simulation results are used to estimate discretization errors with multilevel error expansions. These expansions are local, and identify areas requiring local grid refinement. These refinements are added adaptively by the simulator, and the resulting composite grid equations are solved by a nonlinear Fast Adaptive Composite (FAC) Grid method, with a damped Newton algorithm being used on each local grid. The nonsymmetric linear systems of equations resulting from Newton's method are in turn solved by a preconditioned Conjugate Gradient-like algorithm. The scheme is demonstrated by performing fine and coarse grid simulations of several multiphase reservoirs from around the world.
Short‐term time step convergence in a climate model
Rasch, Philip J.; Taylor, Mark A.; Jablonowski, Christiane
2015-01-01
Abstract This paper evaluates the numerical convergence of very short (1 h) simulations carried out with a spectral‐element (SE) configuration of the Community Atmosphere Model version 5 (CAM5). While the horizontal grid spacing is fixed at approximately 110 km, the process‐coupling time step is varied between 1800 and 1 s to reveal the convergence rate with respect to the temporal resolution. Special attention is paid to the behavior of the parameterized subgrid‐scale physics. First, a dynamical core test with reduced dynamics time steps is presented. The results demonstrate that the experimental setup is able to correctly assess the convergence rate of the discrete solutions to the adiabatic equations of atmospheric motion. Second, results from full‐physics CAM5 simulations with reduced physics and dynamics time steps are discussed. It is shown that the convergence rate is 0.4—considerably slower than the expected rate of 1.0. Sensitivity experiments indicate that, among the various subgrid‐scale physical parameterizations, the stratiform cloud schemes are associated with the largest time‐stepping errors, and are the primary cause of slow time step convergence. While the details of our findings are model specific, the general test procedure is applicable to any atmospheric general circulation model. The need for more accurate numerical treatments of physical parameterizations, especially the representation of stratiform clouds, is likely common in many models. The suggested test technique can help quantify the time‐stepping errors and identify the related model sensitivities. PMID:27660669
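The convergence rate quoted above (0.4 observed versus 1.0 expected) is the observed order of accuracy, computed from solution errors at successive time-step refinements. A minimal sketch of that standard diagnostic, with illustrative numbers:

```python
import math

def observed_order(err_coarse, err_fine, r=2.0):
    """Observed convergence order from errors at step sizes dt and dt/r,
    assuming err ~ C * dt^p so that p = log(err_coarse/err_fine) / log(r)."""
    return math.log(err_coarse / err_fine) / math.log(r)
```

Applying this to errors from pairs of CAM5-style runs with halved process-coupling time steps is, in essence, the test procedure the paper advocates; a first-order-accurate coupling would give values near 1.0 rather than 0.4.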
NASA Technical Reports Server (NTRS)
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
Adaptive Modulation Schemes for OFDM and SOQPSK Using Error Vector Magnitude (EVM) and Godard Dispersion
2014-10-01
Document Number: SET 2015-0030, 412 TW-PA-14481. Contract Number: W900KK-13-C... The problem addressed is how to find adaptive modulation schemes for OFDM and SOQPSK. Possible approaches include finding a common metric that applies to both OFDM and SOQPSK, and finding the relationship between the two.
A method for improving time-stepping numerics
NASA Astrophysics Data System (ADS)
Williams, P. D.
2012-04-01
In contemporary numerical simulations of the atmosphere, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. The most common time-stepping method is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following atmospheric models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability in these models, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter. The modification has become known as the RAW filter (Williams 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various atmospheric models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other models.
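The leapfrog/RAW-filter combination described above is easy to state in code. A minimal sketch for a generic ODE dy/dt = f(y): with alpha = 1 the filter reduces to the classical Robert-Asselin filter, while alpha slightly above 0.5 (0.53 is the value suggested in Williams' work) removes the non-physical damping at leading order. The first-step Euler bootstrap and parameter values are illustrative:

```python
import numpy as np

def leapfrog_raw(f, y0, dt, n_steps, nu=0.2, alpha=0.53):
    """Leapfrog integration of dy/dt = f(y) with the RAW filter.

    The filter displacement d is split between the current level
    (weight alpha) and the new level (weight alpha - 1), so that the
    sum of the two displacements is conserved."""
    y_prev = np.asarray(y0, float)
    y_curr = y_prev + dt * f(y_prev)                # bootstrap: forward Euler
    for _ in range(n_steps - 1):
        y_next = y_prev + 2.0 * dt * f(y_curr)      # leapfrog step
        d = 0.5 * nu * (y_prev - 2.0 * y_curr + y_next)
        y_curr = y_curr + alpha * d                 # RAW filter: displace both
        y_next = y_next + (alpha - 1.0) * d         # the current and new levels
        y_prev, y_curr = y_curr, y_next
    return y_curr
```

Exactly as the abstract says, switching an existing RA-filtered leapfrog model to RAW amounts to adding the single `(alpha - 1)` correction line.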
Multiple time step integrators in ab initio molecular dynamics
Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.
2014-02-28
Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
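The multiple-time-step idea above can be sketched as a reversible RESPA-style step: slow forces kick on the outer step, fast forces are integrated with velocity-Verlet sub-steps. The paper splits an ab initio potential by fragments or by Coulomb range separation; here `f_fast`/`f_slow` are arbitrary user callables, and the 1-D form is an illustrative simplification:

```python
import numpy as np

def respa_step(x, v, f_fast, f_slow, dt_outer, n_inner, m=1.0):
    """One reversible multiple-time-step (RESPA-style) integration step."""
    dt_inner = dt_outer / n_inner
    v = v + 0.5 * dt_outer * f_slow(x) / m        # half kick, slow force
    for _ in range(n_inner):                      # inner velocity Verlet
        v = v + 0.5 * dt_inner * f_fast(x) / m
        x = x + dt_inner * v
        v = v + 0.5 * dt_inner * f_fast(x) / m
    v = v + 0.5 * dt_outer * f_slow(x) / m        # half kick, slow force
    return x, v
```

The speedup comes from evaluating the expensive slow component only once per outer step (2.5 fs in the paper) while the cheap fast component runs at the small inner step.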
An adaptive, Courant-number-dependent implicit scheme for vertical advection in oceanic modeling
NASA Astrophysics Data System (ADS)
Shchepetkin, Alexander F.
2015-07-01
An oceanic model with an Eulerian vertical coordinate and an explicit vertical advection scheme is subject to the Courant-Friedrichs-Lewy (CFL) limitation. Depending on the horizontal grid spacing, the horizontal-to-vertical grid resolution ratio and the flow pattern, this limitation may easily become the most restrictive factor in choosing the model time step, with the general tendency to become more severe as horizontal resolution becomes finer. Using a terrain-following coordinate makes the local vertical grid spacing depend on topography, ultimately resulting in very fine resolution in shallow areas in comparison with other model classes (z-coordinate and isopycnic), which adds another factor restricting the time step. At the same time, terrain-following models are the models of choice for fine-resolution coastal modeling, often including tides interacting with topography and resulting in large-amplitude baroclinic vertical motions. In this article we examine the possibility of mitigating the vertical CFL restriction, while at the same time avoiding the numerical inaccuracies associated with standard implicit advection schemes. In doing so we design a combined algorithm which acts like a high-order explicit scheme when Courant numbers are small enough to allow an explicit method (which is usually the case throughout the entire modeling domain, except for a few "hot spots"), while at the same time having the ability to adjust itself toward an implicit scheme should it become necessary to avoid stability limitations. This is done in a seamless manner by continuously adjusting the weighting between explicit and implicit components.
Variable time-stepping in the pathwise numerical solution of the chemical Langevin equation.
Ilie, Silvana
2012-12-21
Stochastic modeling is essential for an accurate description of biochemical network dynamics at the level of a single cell. Biochemically reacting systems often evolve on multiple time-scales, so their stochastic mathematical models manifest stiffness. Stochastic models which are, in addition, stiff are computationally very challenging; hence the need for effective and accurate numerical methods for approximating their solutions. An important stochastic model of well-stirred biochemical systems is the chemical Langevin equation, a system of stochastic differential equations with multidimensional non-commutative noise. This model is valid in the regime of large molecular populations, far from the thermodynamic limit. In this paper, we propose a variable time-stepping strategy for the numerical solution of a general chemical Langevin equation, which applies for any level of randomness in the system. Our variable step-size method allows arbitrary values of the time step. Numerical results on several models arising in applications show significant improvement in the accuracy and efficiency of the proposed adaptive scheme over existing methods: strategies based on halving/doubling of the step size and fixed step-size methods.
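A variable-step SDE solver of this general kind can be sketched with Euler-Maruyama and a step-doubling error estimate. This is only an illustration of adaptive stepping for a scalar SDE, not the paper's strategy: the comparison of one full step against two half steps reuses the same Brownian increments, but redrawing noise on rejected steps (done here for brevity) is a simplification of proper Brownian-path refinement:

```python
import numpy as np

def adaptive_em(drift, diffusion, y0, t_end, dt0=1e-3, tol=1e-3, seed=0):
    """Euler-Maruyama for dy = drift(y) dt + diffusion(y) dW with a
    simple variable step size based on full-step vs half-step agreement."""
    rng = np.random.default_rng(seed)
    t, y, dt = 0.0, float(y0), dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        dW1 = rng.normal(0.0, np.sqrt(dt / 2))
        dW2 = rng.normal(0.0, np.sqrt(dt / 2))
        # one full step, using dW = dW1 + dW2
        y_full = y + drift(y) * dt + diffusion(y) * (dW1 + dW2)
        # two half steps along the same noise path
        y_half = y + drift(y) * dt / 2 + diffusion(y) * dW1
        y_half = y_half + drift(y_half) * dt / 2 + diffusion(y_half) * dW2
        err = abs(y_full - y_half)
        if err <= tol:                       # accept the finer solution
            t, y = t + dt, y_half
            dt = min(2 * dt, 0.9 * dt * (tol / (err + 1e-14)) ** 0.5)
        else:                                # reject and halve
            dt *= 0.5
    return y
```

The paper's method additionally handles the multidimensional non-commutative noise of the chemical Langevin equation, which a scalar sketch cannot capture.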
On the maximum time step in weakly compressible SPH
NASA Astrophysics Data System (ADS)
Violeau, Damien; Leroy, Agnès
2014-01-01
In the SPH method for viscous fluids, the time step is subject to empirical stability criteria. We proceed to a stability analysis of the Weakly Compressible SPH equations using the von Neumann approach in arbitrary space dimension for unbounded flow. Considering the continuous SPH interpolant based on integrals, we obtain a theoretical stability criterion for the time step, depending on the kernel standard deviation, the speed of sound and the viscosity. The stability domain appears to be almost independent of the kernel choice for a given space discretisation. Numerical tests show that the theory is very accurate, despite the approximations made. We then extend the theory in order to study the influence of the method used to compute the density, of the gradient and divergence SPH operators, of background pressure, of the model used for viscous forces and of a constant velocity gradient. The influence of time integration scheme is also studied, and proved to be prominent. All of the above theoretical developments give excellent agreement against numerical results. It is found that velocity gradients almost do not affect stability, provided some background pressure is used. Finally, the case of bounded flows is briefly addressed from numerical tests in three cases: a laminar Poiseuille flow in a pipe, a lid-driven cavity and the collapse of a water column on a wedge.
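In WCSPH the acoustic limit set by the numerical speed of sound joins the viscous limit in bounding the time step. A minimal sketch of combining the two criteria; the coefficients are assumed illustrative values, not the theoretical constants derived in the paper:

```python
def wcsph_max_dt(h, c0, nu, c_acoustic=0.25, c_visc=0.125):
    """Illustrative WCSPH time-step bound.

    h  : kernel standard deviation (space discretisation)
    c0 : numerical speed of sound
    nu : kinematic viscosity
    """
    dt_ac = c_acoustic * h / c0      # acoustic CFL limit ~ h / c0
    dt_visc = c_visc * h**2 / nu     # viscous (Fourier) limit ~ h^2 / nu
    return min(dt_ac, dt_visc)
```

Because c0 is chosen much larger than the flow speed to keep density fluctuations small, the acoustic limit usually dominates, which is why the companion ISPH analysis finds a noticeably larger optimal time step.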
Adaptive Modulation Schemes for OFDM and SOQPSK Using Error Vector Magnitude (EVM) and Godard Dispersion
2014-06-01
Document Number: SET 2014-0039, 412TW-PA-14271. This report considers adaptive modulation schemes for orthogonal frequency-division multiplexing (OFDM) and shaped-offset quadrature phase-shift keying (SOQPSK), presenting the error vector magnitude (EVM) for OFDM and second-order Godard dispersion.
Adaptive nonseparable vector lifting scheme for digital holographic data compression.
Xing, Yafei; Kaaniche, Mounir; Pesquet-Popescu, Béatrice; Dufaux, Frédéric
2015-01-01
Holographic data play a crucial role in recent three-dimensional imaging as well as microscopic applications. As a result, huge amounts of storage capacity will be involved for this kind of data. Therefore, it becomes necessary to develop efficient hologram compression schemes for storage and transmission purposes. In this paper, we focus on the shifted distance information, obtained by the phase-shifting algorithm, where two sets of difference data need to be encoded. More precisely, a nonseparable vector lifting scheme is investigated in order to exploit the two-dimensional characteristics of the holographic contents. Simulations performed on different digital holograms have shown the effectiveness of the proposed method in terms of bitrate saving and quality of object reconstruction.
An adaptive additive inflation scheme for Ensemble Kalman Filters
NASA Astrophysics Data System (ADS)
Sommer, Matthias; Janjic, Tijana
2016-04-01
Data assimilation for atmospheric dynamics requires an accurate estimate of the uncertainty of the forecast in order to obtain an optimal combination with available observations. This uncertainty has two components: firstly, the uncertainty which originates in the initial condition of the forecast itself, and secondly, the error of the numerical model used. While the former can be approximated quite successfully with an ensemble of forecasts (an additional sampling error will occur), little is known about the latter. For ensemble data assimilation, ad-hoc methods to address model error include multiplicative and additive inflation schemes, possibly also flow-dependent ones. The additive schemes rely on samples for the model error, e.g. from short-term forecast tendencies or differences of forecasts with varying resolutions. However, since these methods work in ensemble space (i.e. act directly on the ensemble perturbations), the sampling error is fixed and can be expected to affect the skill substantially. In this contribution we show how inflation can be generalized to take into account more degrees of freedom and what improvements for future operational ensemble data assimilation can be expected from this, also in comparison with other inflation schemes.
ERIC Educational Resources Information Center
Johnson, Burke; Strodl, Peter
This paper presents a sensitizing conceptual scheme for examining interpersonal adaptation in urban classrooms. The construct "interpersonal adaptation" is conceptualized as the interaction of individual/personality factors, interpersonal factors, and social/cultural factors. The model is applied to the urban school. The conceptual…
Position control of redundant manipulators using an adaptive error-based control scheme
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Zhou, Zhen-Lei
1990-01-01
A Cartesian-space control scheme is developed to control the motion of kinematically redundant manipulators with 7 degrees of freedom (DOF). The control scheme consists mainly of proportional-derivative (PD) controllers whose gains are adjusted by an adaptation law driven by the errors between the desired and actual trajectories. The adaptation law is derived using the concept of model reference adaptive control (MRAC) and the Lyapunov direct method, under the assumption that the manipulator performs non-compliant and slowly varying motions. The developed control scheme is computationally efficient because its implementation does not require computation of the manipulator dynamics. Computer simulations performed to evaluate the performance of the control scheme are presented and discussed.
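To make the error-driven gain adjustment concrete, here is a minimal single-joint sketch: a PD controller whose gains grow in proportion to the squared tracking errors. The adaptation rates, the unit-mass plant, and the quadratic growth law are assumptions for illustration; the paper's actual law is derived via MRAC and the Lyapunov direct method.

```python
def adaptive_pd_step(q, qd, q_ref, qd_ref, gains, gamma=(5.0, 1.0), dt=0.001):
    """One step of a PD controller with error-driven gain growth
    (simplified MRAC-flavoured law, not the paper's exact adaptation)."""
    kp, kd = gains
    e, ed = q_ref - q, qd_ref - qd
    kp = kp + gamma[0] * e * e * dt   # gains grow where error persists
    kd = kd + gamma[1] * ed * ed * dt
    tau = kp * e + kd * ed
    return tau, (kp, kd)

# drive a unit-mass joint toward q_ref = 1 over 5 s
q, qd, gains = 0.0, 0.0, (20.0, 5.0)
for _ in range(5000):
    tau, gains = adaptive_pd_step(q, qd, 1.0, 0.0, gains)
    qd += tau * 0.001          # unit mass: qdd = tau
    q += qd * 0.001
```

Note the computational appeal mentioned in the abstract: no inverse dynamics is evaluated anywhere, only tracking errors.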
Automatic multirate methods for ordinary differential equations. [Adaptive time steps
Gear, C.W.
1980-01-01
A study is made of the application of integration methods in which different step sizes are used for different members of a system of equations. Such methods can result in savings if the cost of derivative evaluation is high or if a system is sparse; however, the estimation and control of errors is very difficult and can lead to high overheads. Three approaches are discussed, and it is shown that the least intuitive is the most promising. 2 figures.
A Quasi-Conservative Adaptive Semi-Lagrangian Advection-Diffusion Scheme
NASA Astrophysics Data System (ADS)
Behrens, Joern
2014-05-01
Many processes in atmospheric or oceanic tracer transport are conveniently represented by advection-diffusion type equations. Depending on the magnitudes of both components, the mathematical representation and consequently the discretization is a non-trivial problem. We will focus on advection-dominated situations and will introduce a semi-Lagrangian scheme with adaptive mesh refinement for high local resolution. This scheme is well suited for pollutant transport from point sources, or transport processes featuring fine filamentation with corresponding local concentration maxima. In order to achieve stability, accuracy and conservation, we combine an adaptive mesh refinement quasi-conservative semi-Lagrangian scheme, based on an integral formulation of the underlying advective conservation law (Behrens, 2006), with an advection diffusion scheme as described by Spiegelman and Katz (2006). The resulting scheme proves to be conservative and stable, while maintaining high computational efficiency and accuracy.
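A bare-bones semi-Lagrangian advection step shows the mechanism the scheme builds on: trace each grid point back along the velocity and interpolate the upstream value. This 1D periodic sketch with linear interpolation is an assumption for illustration; the cited scheme is quasi-conservative, adaptive, and includes diffusion, none of which appears here.

```python
import numpy as np

def semi_lagrangian_step(q, u, dx, dt):
    """One semi-Lagrangian step on a periodic 1D grid: backtrack each
    node by u*dt and interpolate linearly at the departure point.
    Stable for any CFL number, though interpolation is diffusive."""
    n = q.size
    x_dep = (np.arange(n) - u * dt / dx) % n   # departure points (grid units)
    i = np.floor(x_dep).astype(int)
    w = x_dep - i
    return (1 - w) * q[i % n] + w * q[(i + 1) % n]

q = np.zeros(100)
q[40:60] = 1.0                                  # square pulse
for _ in range(25):
    q = semi_lagrangian_step(q, u=1.0, dx=1.0, dt=2.0)  # CFL = 2
```

Running at CFL = 2, twice the Eulerian stability limit, illustrates why semi-Lagrangian transport is attractive for advection-dominated problems.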
An adaptive actuator failure compensation scheme for two linked 2WD mobile robots
NASA Astrophysics Data System (ADS)
Ma, Yajie; Al-Dujaili, Ayad; Cocquempot, Vincent; El Badaoui El Najjar, Maan
2017-01-01
This paper develops a new adaptive compensation control scheme for two linked mobile robots with actuator failures. A configuration with two linked two-wheel drive (2WD) mobile robots is proposed, and the modelling of its kinematics and dynamics is given. An adaptive failure compensation scheme is developed to compensate for actuator failures, consisting of a kinematic controller and a multi-design integration based dynamic controller. The kinematic controller is a virtual one; based on it, multiple adaptive dynamic control signals are designed to cover all possible failure cases. By combining these dynamic control signals, the dynamic controller is designed, which ensures system stability and asymptotic tracking properties. Simulation results verify the effectiveness of the proposed adaptive failure compensation scheme.
Sensitivity of a thermodynamic sea ice model with leads to time step size
NASA Technical Reports Server (NTRS)
Ledley, T. S.
1985-01-01
The characteristics of sea ice models, developed to study the physics of the growth and melt of ice at the ocean surface and the variations in ice extent, depend on the size of the time step. Thus, to study longer-term variations within a reasonable computer budget, a model with a scheme allowing longer time steps has been constructed. However, the results produced by the model can depend markedly on the length of the time step. The sensitivity of a model to time-step size can be reduced by appropriate approaches. The present investigation is concerned with experiments which use a formulation of a lead parameterization that can be considered a first step toward the development of a lead parameterization suitable for use in long-term climate studies.
Collocation and Galerkin Time-Stepping Methods
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2011-01-01
We study the numerical solutions of ordinary differential equations by one-step methods where the solution at tn is known and that at t(sub n+1) is to be calculated. The approaches employed are collocation, continuous Galerkin (CG) and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t(sub n+1)), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
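The collocation = implicit Runge-Kutta equivalence above can be made concrete with the 2-stage, 3rd-order Radau IIA method (right Radau quadrature points c = 1/3, 1). The sketch integrates the scalar linear test problem y' = λy, chosen here so the implicit stage equations reduce to one linear solve; the test problem is an assumption, the tableau is the standard one.

```python
import numpy as np

def radau_iia_2stage(lam, y0, dt, n_steps):
    """Integrate y' = lam*y with the 2-stage, 3rd-order Radau IIA
    implicit Runge-Kutta (collocation) method. For this linear ODE the
    stage equations (I - lam*dt*A) k = lam*y*1 are solved exactly."""
    A = np.array([[5 / 12, -1 / 12],
                  [3 / 4, 1 / 4]])
    b = np.array([3 / 4, 1 / 4])
    y = float(y0)
    for _ in range(n_steps):
        k = np.linalg.solve(np.eye(2) - lam * dt * A, lam * y * np.ones(2))
        y = y + dt * (b @ k)
    return y

y = radau_iia_2stage(lam=-1.0, y0=1.0, dt=0.1, n_steps=10)  # approximates e^-1
```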
A High Fuel Consumption Efficiency Management Scheme for PHEVs Using an Adaptive Genetic Algorithm
Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah
2015-01-01
A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve fuel consumption reduction, an adaptive genetic algorithm scheme has been designed to adaptively manage the energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and resembles the driving conditions and environment of PHEVs, thus trading off between petrol versus electricity for optimal driving efficiency. Comparison between calculated results and published data shows that the achieved efficiency of the fuzzified genetic algorithm is better by 10% than existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day. PMID:25587974
Adaptive Covariance Inflation in a Multi-Resolution Assimilation Scheme
NASA Astrophysics Data System (ADS)
Hickmann, K. S.; Godinez, H. C.
2015-12-01
When forecasts are performed using modern data assimilation methods, observation and model error can be scale-dependent. During data assimilation, the blending of error across scales can result in model divergence, since large errors at one scale can be propagated across scales during the analysis step. Wavelet-based multi-resolution analysis can be used to separate scales in model and observations during the application of an ensemble Kalman filter. However, this separation is done at the cost of implementing an ensemble Kalman filter at each scale. This presents problems when tuning the covariance inflation parameter at each scale. We present a method to adaptively tune a scale-dependent covariance inflation vector based on balancing the covariance of the innovation and the covariance of observations of the ensemble. Our methods are demonstrated on a one-dimensional Kuramoto-Sivashinsky (K-S) model known to demonstrate non-linear interactions between scales.
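A scalar caricature of innovation-based inflation tuning: if the mean squared innovation exceeds what the ensemble spread plus observation error predicts, the ensemble is under-dispersed and should be inflated. The function below and its synthetic test data are assumptions for illustration, a simplified scalar version of the scale-dependent vector the abstract describes.

```python
import numpy as np

def innovation_based_inflation(ensemble_obs, y_obs, obs_var):
    """Estimate a scalar inflation factor from the consistency relation
    E[d^2] = rho * HBH^T + R, where d is the innovation. Simplified,
    scalar stand-in for the scale-dependent inflation vector."""
    mean_obs = ensemble_obs.mean(axis=0)
    d = y_obs - mean_obs                        # innovations
    hbht = ensemble_obs.var(axis=0, ddof=1)     # ensemble spread in obs space
    rho = (np.mean(d**2) - obs_var) / np.mean(hbht)
    return max(rho, 1.0)                        # never deflate

rng = np.random.default_rng(1)
truth = rng.normal(0.0, 2.0, size=200)          # true spread = 2
ens = rng.normal(0.0, 1.0, size=(20, 200))      # under-dispersed ensemble
obs = truth + rng.normal(0.0, 0.5, size=200)
rho = innovation_based_inflation(ens, obs, obs_var=0.25)
```

Because the ensemble spread (variance 1) understates the true variance (4), the diagnosed factor comes out well above 1, signalling inflation is needed.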
Adaptive Test Schemes for Control of Paratuberculosis in Dairy Cows
Græsbøll, Kaare; Nielsen, Søren Saxmose; Christiansen, Lasse Engbo; Toft, Nils; Halasa, Tariq
2016-01-01
Paratuberculosis is a chronic infection that in dairy cattle causes reduced milk yield, weight loss, and ultimately fatal diarrhea. Subclinical animals can excrete bacteria (Mycobacterium avium ssp. paratuberculosis, MAP) in feces and infect other animals. Farmers identify the infectious animals through a variety of test-strategies, but are challenged by the lack of perfect tests. Frequent testing increases the sensitivity but the costs of testing are a cause of concern for farmers. Here, we used a herd simulation model using milk ELISA tests to evaluate the epidemiological and economic consequences of continuously adapting the sampling interval in response to the estimated true prevalence in the herd. The key results were that the true prevalence was greatly affected by the hygiene level and to some extent by the test-frequency. Furthermore, the choice of prevalence that will be tolerated in a control scenario had a major impact on the true prevalence in the normal hygiene setting, but less so when the hygiene was poor. The net revenue is not greatly affected by the test-strategy, because of the general variation in net revenues between farms. An exception to this is the low hygiene herd, where frequent testing results in lower revenue. When we look at the probability of eradication, then it is correlated with the testing frequency and the target prevalence during the control phase. The probability of eradication is low in the low hygiene herd, and a test-and-cull strategy should probably not be the primary strategy in this herd. Based on this study we suggest that, in order to control MAP, the standard Danish dairy farm should use an adaptive strategy where a short sampling interval of three months is used when the estimated true prevalence is above 1%, and otherwise use a long sampling interval of one year. PMID:27907192
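The adaptive strategy suggested in the closing sentence reduces to a one-line decision rule. A literal rendering of the stated thresholds (3-month interval while estimated true prevalence exceeds 1%, otherwise one year); the function name and month units are ours:

```python
def next_sampling_interval(estimated_true_prevalence, threshold=0.01):
    """Adaptive MAP milk-ELISA test scheme from the study: sample every
    3 months while estimated true prevalence is above 1%, otherwise
    once a year. Returns the interval in months."""
    return 3 if estimated_true_prevalence > threshold else 12
```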
NASA Astrophysics Data System (ADS)
Shi, Yu; Liang, Long; Ge, Hai-Wen; Reitz, Rolf D.
2010-03-01
Acceleration of the chemistry solver for engine combustion is of much interest due to the fact that in practical engine simulations extensive computational time is spent solving the fuel oxidation and emission formation chemistry. A dynamic adaptive chemistry (DAC) scheme based on a directed relation graph error propagation (DRGEP) method has been applied to study homogeneous charge compression ignition (HCCI) engine combustion with detailed chemistry (over 500 species) previously using an R-value-based breadth-first search (RBFS) algorithm, which significantly reduced computational times (by as much as 30-fold). The present paper extends the use of this on-the-fly kinetic mechanism reduction scheme to model combustion in direct-injection (DI) engines. It was found that the DAC scheme becomes less efficient when applied to DI engine simulations using a kinetic mechanism of relatively small size, and the accuracy of the original DAC scheme decreases for conventional non-premixed combustion engines. The present study also focuses on determination of search-initiating species, involvement of the NOx chemistry, selection of a proper error tolerance, as well as treatment of the interaction of chemical heat release and the fuel spray. Both DAC schemes were integrated into the ERC KIVA-3v2 code, and simulations were conducted to compare the two schemes. In general, the present DAC scheme has better efficiency and similar accuracy compared to the previous DAC scheme. The efficiency depends on the size of the chemical kinetics mechanism used and the engine operating conditions. For cases using a small n-heptane kinetic mechanism of 34 species, 30% of the computational time is saved, and 50% for a larger n-heptane kinetic mechanism of 61 species. The paper also demonstrates that by combining the present DAC scheme with an adaptive multi-grid chemistry (AMC) solver, it is feasible to simulate a direct-injection engine using a detailed n-heptane mechanism with 543 species.
Consistency of internal fluxes in a hydrological model running at multiple time steps
NASA Astrophysics Data System (ADS)
Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken
2016-04-01
Improving hydrological models remains a difficult task and many ways can be explored, among which one can find the improvement of spatial representation, the search for more robust parametrization, the better formulation of some processes or the modification of model structures by trial-and-error procedure. Several past works indicate that model parameters and structure can be dependent on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps, to find solutions for improvements. Here we analyse the impact of data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, by using a large data set of 240 catchments. To this end, fine time step hydro-climatic information at sub-hourly resolution is used as input of a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to be run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance to the identification of the model components that should be improved. Our analysis indicates that the baseline model structure is to be modified at sub-daily time steps to warrant the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception model component, whose output flux showed the strongest sensitivity to modelling time step. The dependency of the optimal model
A Self-Adaptive Behavior-Aware Recruitment Scheme for Participatory Sensing.
Zeng, Yuanyuan; Li, Deshi
2015-09-16
Participatory sensing services utilizing the abundant social participants with sensor-enabled handheld smart device resources are gaining high interest nowadays. One of the challenges faced is the recruitment of participants in a way that fully utilizes their daily activity behavior and adapts to realistic application scenarios. In this paper, we propose a self-adaptive behavior-aware recruitment scheme for participatory sensing. People are assumed to join the sensing tasks along with their daily activity, without pre-defined ground truth or any instructions. The scheme models tempo-spatial behavior and data-quality rating to select participants for a participatory sensing campaign. Based on this, the recruitment is formulated as a linear programming problem considering tempo-spatial coverage, data quality, and budget. The scheme enables one to check and adjust the recruitment strategy adaptively according to the application scenario. The evaluations show that our scheme provides efficient sensing performance in terms of stability, low cost, tempo-spatial correlation, and self-adaptiveness.
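The selection step can be sketched without an LP solver: pick participants with the best coverage-times-quality per unit cost until the budget is spent. This greedy stand-in, including the candidate tuple layout and value function, is our assumption; the paper formulates the problem as a proper linear program.

```python
def recruit(candidates, budget):
    """Greedy sketch of budgeted participant recruitment: rank by
    (coverage * quality) / cost, take while budget allows. Simplified
    stand-in for the paper's linear-programming formulation.
    Each candidate is a tuple (id, coverage, quality, cost)."""
    ranked = sorted(candidates, key=lambda p: p[1] * p[2] / p[3], reverse=True)
    chosen, spent = [], 0.0
    for pid, cov, qual, cost in ranked:
        if spent + cost <= budget:
            chosen.append(pid)
            spent += cost
    return chosen

pool = [("a", 0.9, 0.8, 5), ("b", 0.4, 0.9, 1), ("c", 0.7, 0.5, 3)]
# value/cost ratios: a = 0.144, b = 0.36, c = 0.117 -> rank b, a, c
sel = recruit(pool, budget=6)
```

Greedy selection is not optimal in general, which is one reason an LP formulation is preferable when coverage constraints interact.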
A stable interface element scheme for the p-adaptive lifting collocation penalty formulation
NASA Astrophysics Data System (ADS)
Cagnone, J. S.; Nadarajah, S. K.
2012-02-01
This paper presents a procedure for adaptive polynomial refinement in the context of the lifting collocation penalty (LCP) formulation. The LCP scheme is a high-order unstructured discretization method unifying the discontinuous Galerkin, spectral volume, and spectral difference schemes in a single differential formulation. Due to the differential nature of the scheme, the treatment of inter-cell fluxes for spatially varying polynomial approximations is not straightforward. Specially designed elements are proposed to tackle non-conforming polynomial approximations. These elements are constructed such that a conforming interface between polynomial approximations of different degrees is recovered. The stability and conservation properties of the scheme are analyzed, and various inviscid compressible flow calculations are performed to demonstrate the potential of the proposed approach.
Adaptivity with near-orthogonality constraint for high compression rates in lifting scheme framework
NASA Astrophysics Data System (ADS)
Sliwa, Tadeusz; Voisin, Yvon; Diou, Alain
2004-01-01
In recent years, the lifting scheme has proven its utility in the field of compression. It makes it easy to create fast, reversible, separable or non-separable, not necessarily linear, multiresolution analyses for sound, images, video, or even 3D graphics. An interesting feature of the lifting scheme is the ability to build adaptive transforms for compression more easily than with other decompositions. Much work has already been done on this subject, especially in the lossless or near-lossless compression framework, where better compression than with commonly used methods can be obtained. However, most of the techniques used in adaptive near-lossless compression cannot be extended to higher lossy compression rates, even in the simplest cases. This is due to the quantization error introduced before coding, whose propagation through the inverse transform is not controlled. The authors consider the classical lifting scheme, with linear convolution filters, and study criteria that maintain a high level of adaptivity and well-behaved error propagation through the inverse transform. This article presents a relatively simple criterion for obtaining filters able to support image and video compression at high compression rates, tested here with the SPIHT coder. To this end, the update and predict filters are adapted simultaneously using a constrained least-squares method. The constraint consists of a near-orthogonality inequality, allowing a sufficiently high level of adaptivity. Some compression results are given, illustrating the relevance of this method, even with short filters.
A 3D finite-volume scheme for the Euler equations on adaptive tetrahedral grids
Vijayan, P.; Kallinderis, Y.
1994-08-01
The paper describes the development and application of a new Euler solver for adaptive tetrahedral grids. Spatial discretization uses a finite-volume, node-based scheme that is of central-differencing type. A second-order Taylor series expansion is employed to march the solution in time according to the Lax-Wendroff approach. Special upwind-like smoothing operators for unstructured grids are developed for shock-capturing, as well as for suppression of solution oscillations. The scheme is formulated so that all operations are edge-based, which reduces the computational effort significantly. An adaptive grid algorithm is employed in order to resolve local flow features. This is achieved by dividing the tetrahedral cells locally, guided by a flow feature detection algorithm. Application cases include transonic flow around the ONERA M6 wing and transonic flow past a transport aircraft configuration. Comparisons with experimental data evaluate accuracy of the developed adaptive solver. 31 refs., 33 figs.
An adaptive remeshing scheme for vortex dominated flows using three-dimensional unstructured grids
NASA Astrophysics Data System (ADS)
Parikh, Paresh
1995-10-01
An adaptive remeshing procedure for vortex dominated flows is described, which uses three-dimensional unstructured grids. Surface grid adaptation is achieved using the static pressure as an adaptation parameter, while entropy is used in the field to accurately identify high vorticity regions. An emphasis has been placed in making the scheme as automatic as possible so that a minimum user interaction is required between remeshing cycles. Adapted flow solutions are obtained on two sharp-edged configurations at low speed, high angle-of-attack flow conditions. The results thus obtained are compared with fine grid CFD solutions and experimental data, and conclusions are drawn as to the efficiency of the adaptive procedure.
Adaptive 2-D wavelet transform based on the lifting scheme with preserved vanishing moments.
Vrankic, Miroslav; Sersic, Damir; Sucic, Victor
2010-08-01
In this paper, we propose novel adaptive wavelet filter bank structures based on the lifting scheme. The filter banks are nonseparable, based on quincunx sampling, with their properties being pixel-wise adapted according to the local image features. Despite being adaptive, the filter banks retain a desirable number of primal and dual vanishing moments. The adaptation is introduced in the predict stage of the filter bank with an adaptation region chosen independently for each pixel, based on the intersection of confidence intervals (ICI) rule. The image denoising results are presented for both synthetic and real-world images. It is shown that the obtained wavelet decompositions perform well, especially for synthetic images that contain periodic patterns, for which the proposed method outperforms the state of the art in image denoising.
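The predict/update mechanism that the adaptation acts on can be shown in its simplest separable 1D form: a linear predictor and a mean-preserving update, each invertible by construction. This sketch is an assumed illustration of plain lifting with periodic boundaries; the paper's transform is nonseparable, quincunx-sampled, and adapts the predictor per pixel via the ICI rule.

```python
import numpy as np

def lifting_forward(x):
    """One level of a 1D lifting transform: predict odd samples by
    linear interpolation of even neighbours, then update even samples
    to preserve the mean (periodic boundaries). Separable sketch only."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - 0.5 * (even + np.roll(even, -1))   # predict step: detail
    s = even + 0.25 * (d + np.roll(d, 1))        # update step: smooth
    return s, d

def lifting_inverse(s, d):
    """Exact inverse: undo the update, then the predict."""
    even = s - 0.25 * (d + np.roll(d, 1))
    odd = d + 0.5 * (even + np.roll(even, -1))
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([2.0, 4.0, 6.0, 8.0, 9.0, 7.0, 5.0, 3.0])
s, d = lifting_forward(x)
x_rec = lifting_inverse(s, d)
```

Perfect reconstruction holds whatever the predictor does, which is exactly why lifting tolerates pixel-wise adaptation of the predict step.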
Multi-dimensional upwind fluctuation splitting scheme with mesh adaption for hypersonic viscous flow
NASA Astrophysics Data System (ADS)
Wood, William Alfred, III
production is shown relative to DMFDSFV. Remarkably the fluctuation splitting scheme shows grid converged skin friction coefficients with only five points in the boundary layer for this case. A viscous Mach 17.6 (perfect gas) cylinder case demonstrates solution monotonicity and heat transfer capability with the fluctuation splitting scheme. While fluctuation splitting is recommended over DMFDSFV, the difference in performance between the schemes is not so great as to render DMFDSFV obsolete. The second half of the dissertation develops a local, compact, anisotropic unstructured mesh adaption scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. This alignment behavior stands in contrast to the curvature clustering nature of the local, anisotropic unstructured adaption strategy based upon a posteriori error estimation that is used for comparison. The characteristic alignment is most pronounced for linear advection, with reduced improvement seen for the more complex non-linear advection and advection-diffusion cases. The adaption strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization. The system test case for the adaption strategy is a sting mounted capsule at Mach-10 wind tunnel conditions, considered in both two-dimensional and axisymmetric configurations. For this complex flowfield the adaption results are disappointing since feature alignment does not emerge from the local operations. Aggressive adaption is shown to result in a loss of robustness for the solver, particularly in the bow shock/stagnation point interaction region. Reducing the adaption strength maintains solution robustness but fails to produce significant improvement in the surface heat transfer predictions.
High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs
NASA Technical Reports Server (NTRS)
Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.
2014-01-01
This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) time-stepping Runge-Kutta (RK) schemes which increases numerical stability with respect to the time step size, resulting in decreased computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain a significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burgers' equation and the 2D advection equation, respectively. The method uses two different summation by parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in numerical stability 65 times greater than that of the fully explicit scheme is demonstrated to be achievable with the OP IMEX method applied to the 1D Burgers' equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. Also, the domain partitioning method in this work shows potential for breaking the computational domain into manageable sizes such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solving methods rather than the standard iterative methods currently used.
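The IMEX splitting idea reduces, at first order, to treating the non-stiff term explicitly and the stiff term implicitly within one step. The sketch below applies this to 1D periodic advection-diffusion: explicit upwind advection, implicit diffusion via a direct linear solve. The first-order Euler splitting and the dense-matrix solve are simplifying assumptions; the paper uses 6-stage, 4th-order additive IMEX-RK schemes with SBP operators.

```python
import numpy as np

def imex_euler_step(q, u, nu, dx, dt):
    """First-order IMEX step for q_t + u q_x = nu q_xx on a periodic
    grid: explicit upwind advection (u > 0 assumed), implicit diffusion
    solved as (I - dt*nu*L) q_new = q + dt*adv, with L the discrete
    Laplacian. Dense matrix built per call for clarity, not speed."""
    n = q.size
    adv = -u * (q - np.roll(q, 1)) / dx
    main = 1 + 2 * dt * nu / dx**2
    off = -dt * nu / dx**2
    A = (np.eye(n) * main
         + np.roll(np.eye(n), 1, axis=1) * off     # superdiagonal (+ wrap)
         + np.roll(np.eye(n), -1, axis=1) * off)   # subdiagonal (+ wrap)
    return np.linalg.solve(A, q + dt * adv)

q = np.sin(2 * np.pi * np.arange(64) / 64)
for _ in range(100):
    q = imex_euler_step(q, u=1.0, nu=0.5, dx=1.0, dt=0.5)
```

With nu = 0.5 and dt = 0.5, a fully explicit diffusion update (limit dt <= dx^2/(2 nu) = 1) would be marginal, while the implicit treatment leaves only the mild advective CFL constraint, the same stability gain the abstract quantifies for the high-order version.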
An Adaptive Handover Prediction Scheme for Seamless Mobility Based Wireless Networks
Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime
2014-01-01
We propose an adaptive handover prediction (AHP) scheme for seamless mobility based wireless networks. That is, the AHP scheme incorporates fuzzy logic with AP prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, mobile node relative direction towards the access points in the vicinity, and access point load, are collected and considered inputs of the fuzzy decision making system in order to select the best preferable AP around WLANs. The obtained handover decision which is based on the calculated quality cost using fuzzy inference system is also based on adaptable coefficients instead of fixed coefficients. In other words, the mean and the standard deviation of the normalized network prediction metrics of fuzzy inference system, which are collected from available WLANs are obtained adaptively. Accordingly, they are applied as statistical information to adjust or adapt the coefficients of membership functions. In addition, we propose an adjustable weight vector concept for input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed in each MN independently after knowing RSS, direction toward APs, and AP load. Finally, performance evaluation of the proposed scheme shows its superiority compared with representatives of the prediction approaches. PMID:25574490
Lin, Horng-Horng; Chuang, Jen-Hui; Liu, Tyng-Luh
2011-03-01
To model a scene for background subtraction, Gaussian mixture modeling (GMM) is a popular choice for its capability of adapting to background variations. However, GMM often suffers from a tradeoff between robustness to background changes and sensitivity to foreground abnormalities, and is inefficient in managing this tradeoff across various surveillance scenarios. By reviewing the formulations of GMM, we identify that the tradeoff can be easily controlled by adaptively adjusting the GMM's learning rates for image pixels at different locations and of distinct properties. A new rate control scheme based on high-level feedback is then developed to provide better regularization of background adaptation for GMM and to help resolve the tradeoff. Additionally, to handle lighting variations that change too fast to be caught by GMM, a heuristic rooted in frame differencing is proposed to assist the rate control scheme in reducing false foreground alarms. Experiments show that the proposed learning rate control scheme, together with the heuristic for adapting to over-quick lighting changes, gives better performance than conventional GMM approaches.
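The feedback-driven rate control above can be sketched as a per-pixel update rule. The following Python fragment is a minimal illustration only: the halving/growth factors, the rate bounds, and the single-Gaussian running-average model are assumptions, not the paper's GMM formulation.

```python
import numpy as np

def update_background(bg, frame, alpha, feedback_mask, lo=0.001, hi=0.1):
    """One background-update step with per-pixel adaptive learning rates.

    bg, frame     : float arrays (H, W), background model and new frame
    alpha         : float array (H, W), per-pixel learning rates
    feedback_mask : bool array (H, W), True where high-level feedback flags
                    likely foreground (rate lowered), False otherwise.
    """
    # Lower the rate where foreground is suspected, raise it elsewhere,
    # keeping every rate inside the assumed band [lo, hi].
    alpha = np.where(feedback_mask, alpha * 0.5, alpha * 1.5)
    alpha = np.clip(alpha, lo, hi)
    # Standard running-average background update with the adapted rates.
    bg = (1.0 - alpha) * bg + alpha * frame
    return bg, alpha
```

A real GMM background subtractor would adapt the rate of each mixture component rather than a single mean, but the rate-clipping logic carries over unchanged.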
Parallel Implementation of an Adaptive Scheme for 3D Unstructured Grids on the SP2
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Strawn, Roger C.
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10% of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.
Parallel implementation of an adaptive scheme for 3D unstructured grids on the SP2
NASA Technical Reports Server (NTRS)
Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10 percent of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all the mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.
A well-balanced numerical scheme for shallow water simulation on adaptive grids
NASA Astrophysics Data System (ADS)
Zhang, H. J.; Zhou, J. Z.; Bi, S.; Li, Q. Q.; Fan, Y.
2014-04-01
The efficiency of solving the two-dimensional shallow-water equations (SWEs) is vital for the simulation of large-scale flood inundation. For flood flows over real topography, a local high-resolution method using adaptable grids is required in order to preserve the accuracy of the flow pattern while saving computational cost. This paper introduces an adaptive grid model that uses an adaptive criterion calculated on the basis of the water level. The grid adaption is performed by manipulating the subdivision levels of the computational grids. As the flow features vary during shallow-wave propagation, the local grid density changes adaptively and the stored neighbor-relationship information updates correspondingly, achieving a balance between model accuracy and running efficiency. In this work, a well-balanced (WB) scheme for solving the SWEs is introduced. In the reconstruction of the Riemann states, the definition of the unique bottom elevation on grid interfaces is modified, and the numerical scheme is pre-balanced automatically. After validation against two idealized test cases, the proposed model is applied to simulate flood inundation due to a dam break of the Zhanghe Reservoir, Hubei province, China. The results show that the presented model is robust and well-balanced, with good computational efficiency and numerical stability, and thus has promising application prospects.
Adaptive multiresolution WENO schemes for multi-species kinematic flow models
Buerger, Raimund . E-mail: rburger@ing-mat.udec.cl; Kozakevicius, Alice . E-mail: alicek@smail.ufsm.br
2007-06-10
Multi-species kinematic flow models lead to strongly coupled, nonlinear systems of first-order, spatially one-dimensional conservation laws. The number of unknowns (the concentrations of the species) may be arbitrarily high. Models of this class include a multi-species generalization of the Lighthill-Whitham-Richards traffic model and a model for the sedimentation of polydisperse suspensions. Their solutions typically involve kinematic shocks separating areas of constancy, and should be approximated by high resolution schemes. A fifth-order weighted essentially non-oscillatory (WENO) scheme is combined with a multiresolution technique that adaptively generates a sparse point representation (SPR) of the evolving numerical solution. Thus, computational effort is concentrated on zones of strong variation near shocks. Numerical examples from the traffic and sedimentation models demonstrate the effectiveness of the resulting WENO multiresolution (WENO-MRS) scheme.
An adaptive scaling and biasing scheme for OFDM-based visible light communication systems.
Wang, Zhaocheng; Wang, Qi; Chen, Sheng; Hanzo, Lajos
2014-05-19
Orthogonal frequency-division multiplexing (OFDM) has been widely used in visible light communication systems to achieve high-rate data transmission. Due to the nonlinear transfer characteristics of light emitting diodes (LEDs) and the high peak-to-average-power ratio of OFDM signals, the transmitted signal has to be scaled and biased before modulating the LEDs. In this contribution, an adaptive scaling and biasing scheme is proposed for OFDM-based visible light communication systems, which fully exploits the dynamic range of the LEDs and improves the achievable system performance. Specifically, the proposed scheme calculates near-optimal scaling and biasing factors for each specific OFDM symbol according to the distribution of the signals, which strikes an attractive trade-off between the effective signal power and the clipping-distortion power. Our simulation results demonstrate that the proposed scheme significantly improves the performance without changing the LED's emitted power, while maintaining the same receiver structure.
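The per-symbol scaling-and-biasing step can be sketched as follows. This is a simplified stand-in, assuming a hypothetical k-sigma rule for the scaling factor and a bias at the centre of the LED's linear region; the paper's near-optimal factors come from the actual signal distribution, not this rule.

```python
import numpy as np

def scale_and_bias(symbol, i_min, i_max, k=3.0):
    """Map one real-valued OFDM symbol into the LED dynamic range [i_min, i_max].

    The scale is chosen from the symbol's own standard deviation (assumed
    k-sigma rule), trading clipping distortion against effective signal
    power; samples outside the range are clipped.
    """
    sigma = symbol.std()
    scale = (i_max - i_min) / (2.0 * k * sigma) if sigma > 0 else 1.0
    bias = 0.5 * (i_max + i_min)  # centre of the assumed linear region
    driven = np.clip(scale * symbol + bias, i_min, i_max)
    return driven, scale, bias
```

Lowering k increases the effective signal power at the price of more frequent clipping, which is exactly the trade-off the adaptive scheme navigates per symbol.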
An Adaptive Semi-Implicit Scheme for Simulations of Unsteady Viscous Compressible Flows
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Modiano, David; Crutchfield, William Y.; Bell, John B.; Colella, Phillip
1995-01-01
A numerical scheme for simulation of unsteady, viscous, compressible flows is considered. The scheme employs an explicit discretization of the inviscid terms of the Navier-Stokes equations and an implicit discretization of the viscous terms. The discretization is second order accurate in both space and time. Under appropriate assumptions, the implicit system of equations can be decoupled into two linear systems of reduced rank. These are solved efficiently using a Gauss-Seidel method with multigrid convergence acceleration. When coupled with a solution-adaptive mesh refinement technique, the hybrid explicit-implicit scheme provides an effective methodology for accurate simulations of unsteady viscous flows. The methodology is demonstrated for both body-fitted structured grids and for rectangular (Cartesian) grids.
Rodrigues, Joel J. P. C.
2014-01-01
This paper exploits sink mobility to prolong the lifetime of sensor networks while keeping the data transmission delay relatively low. A location-predictive and time-adaptive data gathering scheme is proposed. In this paper, we introduce a sink location prediction principle based on loose time synchronization and deduce the time-location formulas of the mobile sink. According to their local clocks and the time-location formulas of the mobile sink, nodes in the network are able to calculate the current location of the mobile sink accurately and route data packets toward it in a timely manner by multihop relay. Considering that the data packets generated in different areas may differ greatly, an adaptive dwelling time adjustment method is also proposed to balance energy consumption among nodes in the network. Simulation results show that our data gathering scheme enables data routing with lower data transmission delay and balanced energy consumption among nodes. PMID:25302327
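A time-location formula of the kind described reduces, for a sink moving at constant velocity, to simple linear extrapolation from a synchronized reference time. The sketch below assumes such a constant-velocity segment; the paper's actual trajectory model may be piecewise or more elaborate.

```python
def sink_location(t, t0, x0, y0, vx, vy):
    """Predict the mobile sink's position at local time t, given that it
    was at (x0, y0) at reference time t0 and moves with constant velocity
    (vx, vy). Under loose time synchronization, each node evaluates this
    with its own clock reading for t."""
    dt = t - t0
    return x0 + vx * dt, y0 + vy * dt
```

Each relay node can evaluate this locally and forward packets toward the predicted position rather than the last known one.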
Design and Analysis of Schemes for Adapting Migration Intervals in Parallel Evolutionary Algorithms.
Mambrini, Andrea; Sudholt, Dirk
2015-01-01
The migration interval is one of the fundamental parameters governing the dynamic behaviour of island models. Yet, there is little understanding of how this parameter affects performance, and how to set it optimally for the problem at hand. We propose schemes for adapting the migration interval according to whether fitness improvements have been found. As long as no improvement is found, the migration interval is increased to minimise communication. Once the best fitness has improved, the migration interval is decreased to spread new best solutions more quickly. We provide a method for obtaining upper bounds on the expected running time and the communication effort, defined as the expected number of migrants sent. Example applications of this method to common example functions show that our adaptive schemes are able to compete with, or even outperform, the optimal fixed choice of the migration interval, with regard to running time and communication effort.
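The adaptation rule described above, increase while stagnating, decrease on improvement, can be sketched in a few lines. The doubling factor, the reset-to-minimum choice, and the bounds are illustrative assumptions; the paper analyses a family of such schemes rather than one fixed rule.

```python
def adapt_interval(interval, improved, tau_min=1, tau_max=1024):
    """Adapt an island model's migration interval: double it while no
    fitness improvement is seen (less communication), reset it to the
    minimum when the best fitness improves (spread the new best solution
    quickly). Bounds tau_min/tau_max are illustrative."""
    if improved:
        return tau_min
    return min(interval * 2, tau_max)
```

Each island would call this once per migration epoch, so communication cost shrinks geometrically during long plateaus.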
Effect of Time Step On Atmospheric Model Systematic Errors
NASA Astrophysics Data System (ADS)
Williamson, D. L.
Semi-Lagrangian approximations are becoming more common in operational Numerical Weather Prediction models because of the efficiency allowed by their long time steps. The early work demonstrating that semi-Lagrangian forecasts were comparable to Eulerian forecasts in accuracy was based on mid-latitude short-range forecasts that were dominated by dynamical processes. These indicated no significant loss of accuracy with semi-Lagrangian approximations and long time steps. Today, subgrid-scale parameterizations play a larger role in even short-range forecasts. While not ignored, the effect of a longer time step on the parameterizations has been less thoroughly studied. We present results from the NCAR CCM3 that indicate that the systematic errors in tropical precipitation patterns can depend on the time step. The actual dependency depends on the parameterization suite of the model. We identify the dependency in aqua-planet integrations. With the CCM3 parameterization suite, longer time steps result in double precipitation maxima straddling the SST maximum, while shorter time steps result in a single precipitation maximum over the SST maximum. Other parameterization suites behave differently. The cause of the dependency will be discussed.
Near-orthogonal and adaptive affine lifting scheme on vector-valued signals
NASA Astrophysics Data System (ADS)
Sliwa, Tadeusz; Voisin, Yvon; Diou, Alain
2004-02-01
The lifting scheme is a widely used second-generation multiresolution technique in image and video processing. It makes it easy to create fast, reversible, separable or non-separable, and not necessarily linear multiresolution analyses for sound, images, video, or even 3D graphics. An interesting feature of the lifting scheme is the ability to build adaptive transforms more easily than with other decompositions. Much work has already been done on this subject, especially in the lossless or near-lossless compression framework, where there is no orthogonality constraint. However, some applications, such as lossy compression or de-noising, require well-conditioned transforms, because the errors introduced by shrinkage or quantization propagate through the inverse transform in an uncontrolled way. The authors have recently presented a technique for determining lifting-scheme filters that combine a high level of adaptivity with near-orthogonality, which is useful for most of these applications. Within this adaptive near-orthogonal framework, the point of interest of this article is affine algebraic filters. Color images and video are studied in particular, from the point of view of compression; the treatment of the vector nature of the signal, rather than processing channels independently, is the focal point of the article.
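For readers unfamiliar with the framework, the split/predict/update structure of a lifting transform is worth seeing concretely. The sketch below implements the classic linear (2,2) lifting pair with periodic boundaries, purely as an illustration of the mechanism; it is not the authors' adaptive affine filters.

```python
import numpy as np

def lifting_forward(signal):
    """One level of a linear lifting transform on an even-length 1-D signal.

    Split into even/odd samples, predict each odd sample from the average
    of its two even neighbours (periodic boundary via np.roll), then
    update the evens so the coarse coefficients preserve the mean."""
    even = signal[0::2].astype(float)
    odd = signal[1::2].astype(float)
    detail = odd - 0.5 * (even + np.roll(even, -1))      # predict step
    approx = even + 0.25 * (detail + np.roll(detail, 1)) # update step
    return approx, detail

def lifting_inverse(approx, detail):
    """Invert by undoing the update and predict steps in reverse order;
    the structure guarantees perfect reconstruction."""
    even = approx - 0.25 * (detail + np.roll(detail, 1))
    odd = detail + 0.5 * (even + np.roll(even, -1))
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out
```

Because inversion simply replays each lifting step with the sign flipped, any predictor, including the adaptive affine filters the article studies, yields a reversible transform by construction.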
GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling
NASA Astrophysics Data System (ADS)
Miki, Yohei; Umemura, Masayuki
2017-04-01
The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics, well suited for GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distributions performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup by a factor of around 3-5 compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on the GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single-precision peak performance of the GPU.
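Hierarchical (block) time stepping works by quantising each particle's required time step onto a power-of-two ladder below a global maximum, so particles sharing a rung can be advanced together. A minimal sketch of that quantisation, independent of GOTHIC's actual implementation:

```python
def assign_block_step(dt_required, dt_max):
    """Return the largest dt_max / 2**n that does not exceed the particle's
    required time step, i.e. its rung on the power-of-two hierarchy used
    by block/hierarchical time stepping."""
    dt = dt_max
    while dt > dt_required:
        dt /= 2.0
    return dt
```

Particles in slowly evolving regions then take a few long steps while only the fast-moving minority is integrated on the short rungs, which is the source of the 3-5x speedup reported above.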
Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding
Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.
2016-01-01
With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes at low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants no less than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works under dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants, and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908
Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.
Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A
2016-08-12
With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes at low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants no less than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works under dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants, and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.
Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding
NASA Astrophysics Data System (ADS)
Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.
2016-08-01
With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes at low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants no less than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works under dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants, and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.
NASA Astrophysics Data System (ADS)
Kim, Jinsul; Lee, Hyunwoo; Ryu, Won; Lee, Byungsun; Hahn, Minsoo
In this letter, we propose a shared adaptive packet loss concealment scheme for quality-guaranteed Internet telephony services that connect multiple users. In order to recover packet loss efficiently in the all-IP based convergence environment, we provide a robust signal recovery scheme based on shared, adaptive utilization of both-side information. The scheme operates according to the average magnitude variation across frames and pitch-period replication on the 1-port gateway (G/W) system. The simulated performance demonstrates that the proposed scheme has the advantages of low processing time and high recovery rates in the all-IP based ubiquitous environment.
A Trust-Based Adaptive Probability Marking and Storage Traceback Scheme for WSNs.
Liu, Anfeng; Liu, Xiao; Long, Jun
2016-03-30
Security is a pivotal issue for wireless sensor networks (WSNs), which are emerging as a promising platform that enables a wide range of military, scientific, industrial and commercial applications. Traceback, a key cyber-forensics technology, can play an important role in tracing and locating a malicious source to guarantee cybersecurity. In this work a trust-based adaptive probability marking and storage (TAPMS) traceback scheme is proposed to enhance security for WSNs. In a TAPMS scheme, the marking probability is adaptively adjusted according to the security requirements of the network and can substantially reduce the number of marking tuples and improve network lifetime. More importantly, a high trust node is selected to store marking tuples, which can avoid the problem of marking information being lost. Experimental results show that the total number of marking tuples can be reduced in a TAPMS scheme, thus improving network lifetime. At the same time, since the marking tuples are stored in high trust nodes, storage reliability can be guaranteed, and the traceback time can be reduced by more than 80%.
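The adaptive part of a marking scheme like this can be sketched as a feedback rule on the per-packet marking probability. The budget-based rule below is a hypothetical stand-in written in the spirit of the abstract; the concrete trust- and security-driven adjustment in the paper differs.

```python
def adapt_marking_probability(p, tuples_stored, budget, p_min=0.01, p_max=0.5):
    """Lower the per-packet marking probability when the number of stored
    marking tuples exceeds the storage/energy budget, raise it when there
    is slack, clamped to an assumed band [p_min, p_max]."""
    if tuples_stored > budget:
        p *= 0.9   # back off: too many tuples, spare the network lifetime
    else:
        p *= 1.1   # tighten: spare capacity, improve traceback coverage
    return min(max(p, p_min), p_max)
```

Run once per reporting period, this keeps the marking overhead near the budget while leaving enough marked packets for path reconstruction.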
An adaptive error modeling scheme for the lossless compression of EEG signals.
Sriraam, N; Eswaran, C
2008-09-01
Lossless compression of EEG signals is of great importance for neurological diagnosis, as specialists consider exact reconstruction of the signal a primary requirement. This paper discusses a lossless compression scheme for EEG signals that involves a predictor and an adaptive error modeling technique. The prediction residues are arranged based on the error count through a histogram computation. Two optimal regions are identified in the histogram plot through a heuristic search such that the bit requirement for encoding the two regions is minimal. Further improvement in compression is achieved by removing the statistical redundancy present in the residue signal using a context-based bias cancellation scheme. Three neural network predictors, namely, the single-layer perceptron, the multilayer perceptron, and the Elman network, and two linear predictors, namely, the autoregressive model and the finite impulse response filter, are considered. Experiments are conducted using EEG signals recorded under different physiological conditions, and the performance of the proposed methods is evaluated in terms of the compression ratio. It is shown that the proposed adaptive error modeling scheme yields better compression results than other known compression methods.
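The predictor-plus-region-search pipeline can be sketched in simplified form. Here a first-order linear predictor stands in for the paper's neural and linear predictors, and a symmetric-threshold search with an assumed fixed escape cost stands in for the histogram-based two-region heuristic; both simplifications are ours.

```python
import numpy as np

def prediction_residues(signal, a=1.0):
    """Residues of a first-order linear predictor x_hat[n] = a * x[n-1]
    (coefficient a is illustrative)."""
    x = np.asarray(signal, dtype=float)
    return x[1:] - a * x[:-1]

def two_region_split(residues):
    """Pick the symmetric threshold T minimising a simple two-region bit
    cost: residues with |r| <= T get a short fixed-length code, the rest
    an escape code (assumed 16 extra bits)."""
    r = np.abs(np.asarray(residues).astype(int))
    best_t, best_cost = 0, float("inf")
    for t in range(1, int(r.max()) + 1):
        inner = int(np.ceil(np.log2(2 * t + 1)))  # bits per inner residue
        outer = inner + 16                        # assumed escape cost
        cost = inner * int(np.sum(r <= t)) + outer * int(np.sum(r > t))
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t
```

On typical EEG-like residues, most samples fall in a narrow band around zero, so the search settles on a small threshold and the rare large residues pay the escape cost.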
A Trust-Based Adaptive Probability Marking and Storage Traceback Scheme for WSNs
Liu, Anfeng; Liu, Xiao; Long, Jun
2016-01-01
Security is a pivotal issue for wireless sensor networks (WSNs), which are emerging as a promising platform that enables a wide range of military, scientific, industrial and commercial applications. Traceback, a key cyber-forensics technology, can play an important role in tracing and locating a malicious source to guarantee cybersecurity. In this work a trust-based adaptive probability marking and storage (TAPMS) traceback scheme is proposed to enhance security for WSNs. In a TAPMS scheme, the marking probability is adaptively adjusted according to the security requirements of the network and can substantially reduce the number of marking tuples and improve network lifetime. More importantly, a high trust node is selected to store marking tuples, which can avoid the problem of marking information being lost. Experimental results show that the total number of marking tuples can be reduced in a TAPMS scheme, thus improving network lifetime. At the same time, since the marking tuples are stored in high trust nodes, storage reliability can be guaranteed, and the traceback time can be reduced by more than 80%. PMID:27043566
Long-term planetary integration with individual time steps
NASA Astrophysics Data System (ADS)
Saha, Prasenjit; Tremaine, Scott
1994-11-01
We describe an algorithm for long-term planetary orbit integrations, including the dominant post-Newtonian effects, that employs individual time steps for each planet. The algorithm is symplectic and exhibits short-term errors that are O(epsilon Omega^2 tau^2), where tau is the time step, Omega is a typical orbital frequency, and epsilon much less than 1 is a typical planetary mass in solar units. By a special starting procedure, long-term errors over an integration interval T can be reduced to O(epsilon^2 Omega^3 tau^2 T). A sample 0.8 Myr integration of the nine planets illustrates that Pluto can have a time step more than 100 times Mercury's, without dominating the positional error. Our algorithm is applicable to other N-body systems.
Time-step Considerations in Particle Simulation Algorithms for Coulomb Collisions in Plasmas
Cohen, B I; Dimits, A; Friedman, A; Caflisch, R
2009-10-29
The accuracy of first-order Euler and higher-order time-integration algorithms for grid-based Langevin-equation collision models is assessed in a specific relaxation test problem. We show that statistical noise errors can overshadow time-step errors, and argue that statistical noise errors can be conflated with time-step effects. Using a higher-order integration scheme may not achieve any benefit in accuracy for examples of practical interest. We also investigate the collisional relaxation of an initial electron-ion relative drift and the collisional relaxation to a resistive steady state, in which a quasi-steady current is driven by a constant applied electric field, as functions of the time step used to resolve the collision processes, using binary and grid-based test-particle Langevin-equation models. We compare results from two grid-based Langevin-equation collision algorithms to results from a binary collision algorithm for modeling electron-ion collisions. Some guidance is provided regarding how large a time step can be used, compared to the inverse of the characteristic collision frequency, for specific relaxation processes.
Yasui, Kotaro; Sakai, Kazuhiko; Kano, Takeshi; Owaki, Dai; Ishiguro, Akio
2017-01-01
Recently, myriapods have attracted the attention of engineers because mobile robots that mimic them potentially have the capability of producing highly stable, adaptive, and resilient behaviors. The major challenge here is to develop a control scheme that can coordinate their numerous legs in real time, and an autonomous decentralized control could be the key to solve this problem. Therefore, we focus on real centipedes and aim to design a decentralized control scheme for myriapod robots by drawing inspiration from behavioral experiments on centipede locomotion under unusual conditions. In the behavioral experiments, we observed the response to the removal of a part of the terrain and to amputation of several legs. Further, we determined that the ground reaction force is significant for generating rhythmic leg movements; the motion of each leg is likely affected by a sensory input from its neighboring legs. Thus, we constructed a two-dimensional model wherein a simple local reflexive mechanism was implemented in each leg. We performed simulations by using this model and demonstrated that the myriapod robot could move adaptively to changes in the environment and body properties. Our findings will shed new light on designing adaptive and resilient myriapod robots that can function under various circumstances. PMID:28152103
Yasui, Kotaro; Sakai, Kazuhiko; Kano, Takeshi; Owaki, Dai; Ishiguro, Akio
2017-01-01
Recently, myriapods have attracted the attention of engineers because mobile robots that mimic them potentially have the capability of producing highly stable, adaptive, and resilient behaviors. The major challenge here is to develop a control scheme that can coordinate their numerous legs in real time, and an autonomous decentralized control could be the key to solve this problem. Therefore, we focus on real centipedes and aim to design a decentralized control scheme for myriapod robots by drawing inspiration from behavioral experiments on centipede locomotion under unusual conditions. In the behavioral experiments, we observed the response to the removal of a part of the terrain and to amputation of several legs. Further, we determined that the ground reaction force is significant for generating rhythmic leg movements; the motion of each leg is likely affected by a sensory input from its neighboring legs. Thus, we constructed a two-dimensional model wherein a simple local reflexive mechanism was implemented in each leg. We performed simulations by using this model and demonstrated that the myriapod robot could move adaptively to changes in the environment and body properties. Our findings will shed new light on designing adaptive and resilient myriapod robots that can function under various circumstances.
An adaptive fault-tolerant event detection scheme for wireless sensor networks.
Yim, Sung-Jib; Choi, Yoon-Hwa
2010-01-01
In this paper, we present an adaptive fault-tolerant event detection scheme for wireless sensor networks. Each sensor node detects an event locally in a distributed manner by using the sensor readings of its neighboring nodes. Confidence levels of sensor nodes are used to dynamically adjust the threshold for decision making, resulting in consistent performance even with increasing number of faulty nodes. In addition, the scheme employs a moving average filter to tolerate most transient faults in sensor readings, reducing the effective fault probability. Only three bits of data are exchanged to reduce the communication overhead in detecting events. Simulation results show that event detection accuracy and false alarm rate are kept very high and low, respectively, even in the case where 50% of the sensor nodes are faulty.
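The local decision step described above can be sketched as a moving-average filter followed by a confidence-weighted neighbour vote. The window size, event level, and majority threshold below are illustrative assumptions, not the paper's parameters.

```python
from collections import deque

class EventDetector:
    """Minimal sketch of a node's fault-tolerant event decision: a moving
    average smooths transient faults in its own readings, then an event is
    declared only if neighbour votes, weighted by their confidence levels,
    reach a majority of the total available confidence."""

    def __init__(self, window=5, event_level=50.0):
        self.readings = deque(maxlen=window)  # moving-average window
        self.event_level = event_level

    def local_vote(self, reading):
        """Smooth the node's own readings and vote on the event locally."""
        self.readings.append(reading)
        avg = sum(self.readings) / len(self.readings)
        return avg >= self.event_level

    def decide(self, own_vote, neighbor_votes, confidences):
        """Combine the one-bit neighbour votes, weighted by confidence,
        against half of the total confidence (assumed threshold)."""
        support = sum(c for v, c in zip(neighbor_votes, confidences) if v)
        return own_vote and support >= 0.5 * sum(confidences)
```

Because faulty nodes accumulate low confidence, their votes contribute little to the weighted sum, which is how the scheme keeps its accuracy as the fraction of faulty nodes grows.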
Adaptive quantization-parameter clip scheme for smooth quality in H.264/AVC.
Hu, Sudeng; Wang, Hanli; Kwong, Sam
2012-04-01
In this paper, we investigate the issues of quality smoothness and bit-rate smoothness during rate control (RC) in H.264/AVC. An adaptive quantization-parameter (Qp) clip scheme is proposed to optimize quality smoothness while keeping the bit-rate fluctuation at an acceptable level. First, the frame complexity variation is studied by defining a complexity ratio between two nearby frames. Second, the range of the generated bits is analyzed to prevent the encoder buffer from overflow and underflow. Third, based on the safe range of the generated bits, an optimal Qp clip range is developed to reduce the quality fluctuation. Experimental results demonstrate that the proposed Qp clip scheme achieves excellent performance in quality smoothness and buffer regulation.
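The clip idea can be sketched as bounding each frame's candidate Qp to a window around the previous frame's Qp, widening the window when the complexity ratio signals a scene change. The window sizes and the 1.5x/0.67x ratio thresholds below are assumptions for illustration, not the optimal range derived in the paper.

```python
def clip_qp(qp_candidate, qp_prev, complexity_ratio, base_range=3):
    """Clip the rate controller's candidate quantization parameter to a
    window around the previous frame's Qp; the window widens when the
    frame-complexity ratio indicates a large content change."""
    widen = 2 if (complexity_ratio > 1.5 or complexity_ratio < 0.67) else 0
    lo = qp_prev - (base_range + widen)
    hi = qp_prev + (base_range + widen)
    return max(lo, min(qp_candidate, hi))
```

Tightening `base_range` smooths quality at the cost of larger bit-rate swings, the exact trade-off the adaptive clip range is designed to balance.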
Designing Adaptive Low-Dissipative High Order Schemes for Long-Time Integrations. Chapter 1
NASA Technical Reports Server (NTRS)
Yee, Helen C.; Sjoegreen, B.; Mansour, Nagi N. (Technical Monitor)
2001-01-01
A general framework for the design of adaptive low-dissipative high order schemes is presented. It encompasses a rather complete treatment of the numerical approach based on four integrated design criteria: (1) For stability considerations, condition the governing equations before the application of the appropriate numerical scheme whenever it is possible; (2) For consistency, compatible schemes that possess stability properties, including physical and numerical boundary condition treatments, similar to those of the discrete analogue of the continuum are preferred; (3) For the minimization of numerical dissipation contamination, efficient and adaptive numerical dissipation control to further improve nonlinear stability and accuracy should be used; and (4) For practical considerations, the numerical approach should be efficient and applicable to general geometries, and an efficient and reliable dynamic grid adaptation should be used if necessary. These design criteria are, in general, very useful to a wide spectrum of flow simulations. However, the demand on the overall numerical approach for nonlinear stability and accuracy is much more stringent for long-time integration of complex multiscale viscous shock/shear/turbulence/acoustics interactions and numerical combustion. Robust classical numerical methods for less complex flow physics are not suitable or practical for such applications. The present approach is designed expressly to address such flow problems, especially unsteady flows. The minimization of employing very fine grids to overcome the production of spurious numerical solutions and/or instability due to under-resolved grids is also sought. The incremental studies to illustrate the performance of the approach are summarized. Extensive testing and full implementation of the approach is forthcoming. The results shown so far are very encouraging.
Adaptive implicit-explicit and parallel element-by-element iteration schemes
NASA Astrophysics Data System (ADS)
Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.
1989-01-01
Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation more clear, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests performed demonstrate the savings in the CPU time and memory.
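The GEBE idea of statically arranging elements into groups with no inter-element coupling can be sketched as a greedy grouping over the element-node incidence. This is an assumed illustration (the function, mesh, and grouping strategy are mine, not the authors' code):

```python
def group_elements(elements):
    """elements: list of node-index tuples; returns groups of element ids
    such that no two elements in a group share a node."""
    groups = []  # each entry: (set of nodes already used, [element ids])
    for eid, nodes in enumerate(elements):
        for used, members in groups:
            if used.isdisjoint(nodes):  # no inter-element coupling within the group
                used.update(nodes)
                members.append(eid)
                break
        else:
            groups.append((set(nodes), [eid]))
    return [members for _, members in groups]

# a 1-D mesh of four 2-node elements sharing end nodes
mesh = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(group_elements(mesh))  # [[0, 2], [1, 3]]: alternating elements decouple
```

Elements within one group touch disjoint node sets, so their contributions can be processed in parallel without write conflicts, which is the parallel-processing potential the abstract refers to.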
Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
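The recursive subdivision of a single Cartesian cell into an adaptively refined tree can be sketched as follows; the `refine` helper, the cell representation, and the corner-flagging sensor are hypothetical stand-ins for the paper's cut-cell and refinement machinery:

```python
def refine(cell, needs_refinement, depth=0, max_depth=4):
    """cell = (x, y, size); returns the leaf cells of the adapted quadtree."""
    x, y, s = cell
    if depth == max_depth or not needs_refinement(cell):
        return [cell]
    h = s / 2.0
    # recursive subdivision into four children (the tree structure is implicit
    # in the call stack; a real solver would store it for connectivity queries)
    children = [(x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)]
    leaves = []
    for c in children:
        leaves.extend(refine(c, needs_refinement, depth + 1, max_depth))
    return leaves

# refine toward a feature at the origin: only cells touching (0, 0) split
sensor = lambda cell: cell[0] == 0.0 and cell[1] == 0.0
leaves = refine((0.0, 0.0, 1.0), sensor, max_depth=3)
print(len(leaves))  # 10 leaves: three coarse siblings per level plus four finest cells
```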
Analysis of adaptive walks on NK fitness landscapes with different interaction schemes
NASA Astrophysics Data System (ADS)
Nowak, Stefan; Krug, Joachim
2015-06-01
Fitness landscapes are genotype to fitness mappings commonly used in evolutionary biology and computer science which are closely related to spin glass models. In this paper, we study the NK model for fitness landscapes where the interaction scheme between genes can be explicitly defined. The focus is on how this scheme influences the overall shape of the landscape. Our main tool for the analysis are adaptive walks, an idealized dynamics by which the population moves uphill in fitness and terminates at a local fitness maximum. We use three different types of walks and investigate how their length (the number of steps required to reach a local peak) and height (the fitness at the endpoint of the walk) depend on the dimensionality and structure of the landscape. We find that the distribution of local maxima over the landscape is particularly sensitive to the choice of interaction pattern. Most quantities that we measure are simply correlated to the rank of the scheme, which is equal to the number of nonzero coefficients in the expansion of the fitness landscape in terms of Walsh functions.
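A greedy adaptive walk on an NK landscape with an adjacent interaction scheme can be sketched as below. The lazily tabulated random fitness contributions are a standard NK construction of my choosing, not the specific interaction schemes analysed in the paper:

```python
import random

random.seed(1)

N, K = 8, 2
tables = [{} for _ in range(N)]  # lazily filled fitness-contribution tables

def contrib(i, genotype):
    # adjacent interaction scheme: locus i interacts with the next K loci
    key = tuple(genotype[(i + j) % N] for j in range(K + 1))
    if key not in tables[i]:
        tables[i][key] = random.random()
    return tables[i][key]

def fitness(genotype):
    return sum(contrib(i, genotype) for i in range(N)) / N

def adaptive_walk(genotype):
    """Greedy walk: step to the fittest one-mutant neighbour until a local peak."""
    steps = 0
    while True:
        nbrs = [genotype[:i] + (1 - genotype[i],) + genotype[i + 1:]
                for i in range(N)]
        best = max(nbrs, key=fitness)
        if fitness(best) <= fitness(genotype):
            return genotype, steps  # no uphill neighbour: local fitness maximum
        genotype, steps = best, steps + 1

start = tuple(random.randrange(2) for _ in range(N))
peak, length = adaptive_walk(start)
print(length, round(fitness(peak), 3))
```

The walk's length and the fitness at its endpoint are exactly the two observables (length and height) the paper studies as a function of the interaction pattern.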
An adaptive window-setting scheme for segmentation of bladder tumor surface via MR cystography.
Duan, Chaijie; Yuan, Kehong; Liu, Fanghua; Xiao, Ping; Lv, Guoqing; Liang, Zhengrong
2012-07-01
This paper proposes an adaptive window-setting scheme for noninvasive detection and segmentation of bladder tumor surface in T1-weighted magnetic resonance (MR) images. The inner border of the bladder wall is first covered by a group of ball-shaped detecting windows with different radii. By extracting the candidate tumor windows and excluding the false-positive (FP) candidates, the entire bladder tumor surface is detected and segmented by the remaining windows. Different from previous bladder tumor detection methods, which mostly focus on the existence of a tumor, this paper emphasizes segmenting the entire tumor surface in addition to detecting the presence of the tumor. The presented scheme was validated on ten clinical T1-weighted MR image datasets (five volunteers and five patients). The bladder tumor surfaces and the normal bladder wall inner borders in the ten datasets were covered by 223 and 10,491 windows, respectively. Such a large number of detecting windows makes the validation statistically meaningful. In the FP reduction step, the best feature combination was obtained by using receiver operating characteristic (ROC) analysis. The validation results demonstrated the potential of the presented scheme in segmenting the entire tumor surface with high sensitivity and a low FP rate. This study inherits our previous results on automatic segmentation of the bladder wall and will be an important element in our MR-based virtual cystoscopy or MR cystography system.
Dynamic adaptive chemistry with operator splitting schemes for reactive flow simulations
NASA Astrophysics Data System (ADS)
Ren, Zhuyin; Xu, Chao; Lu, Tianfeng; Singer, Michael A.
2014-04-01
A numerical technique that uses dynamic adaptive chemistry (DAC) with operator splitting schemes to solve the equations governing reactive flows is developed and demonstrated. Strang-based splitting schemes are used to separate the governing equations into transport fractional substeps and chemical reaction fractional substeps. The DAC method expedites the numerical integration of reaction fractional substeps by using locally valid skeletal mechanisms that are obtained using the directed relation graph (DRG) reduction method to eliminate unimportant species and reactions from the full mechanism. Second-order temporal accuracy of the Strang-based splitting schemes with DAC is demonstrated on one-dimensional, unsteady, freely-propagating, premixed methane/air laminar flames with detailed chemical kinetics and realistic transport. The use of DAC dramatically reduces the CPU time required to perform the simulation, and there is minimal impact on solution accuracy. It is shown that with DAC the starting species and resulting skeletal mechanisms strongly depend on the local composition in the flames. In addition, the number of retained species may be significant only near the flame front region where chemical reactions are significant. For the one-dimensional methane/air flame considered, speed-up factors of three and five are achieved over the entire simulation for GRI-Mech 3.0 and USC-Mech II, respectively. Greater speed-up factors are expected for larger chemical kinetics mechanisms.
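The Strang splitting into transport and reaction fractional substeps can be illustrated on a toy linear problem; `T(u) = -a*u` and `R(u) = -b*u` are assumed stand-ins for the transport and chemistry operators, each substep advanced exactly, as a stiff integrator with a DAC-reduced skeletal mechanism would:

```python
import math

a, b = 1.0, 10.0  # "transport" and "reaction" rates (toy values)

def strang_step(u, dt):
    u *= math.exp(-a * dt / 2)  # half transport substep
    u *= math.exp(-b * dt)      # full reaction substep (the DAC-accelerated part)
    u *= math.exp(-a * dt / 2)  # half transport substep
    return u

u, dt = 1.0, 0.01
for _ in range(100):            # integrate to t = 1
    u = strang_step(u, dt)
exact = math.exp(-(a + b))
print(abs(u - exact))           # near machine precision: these operators commute
```

For commuting linear operators the splitting error vanishes; in the reactive-flow setting the operators do not commute and the half-full-half ordering is what delivers the second-order temporal accuracy demonstrated in the paper.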
Scheduling and adaptation of London's future water supply and demand schemes under uncertainty
NASA Astrophysics Data System (ADS)
Huskova, Ivana; Matrosov, Evgenii S.; Harou, Julien J.; Kasprzyk, Joseph R.; Reed, Patrick M.
2015-04-01
The changing needs of society and the uncertainty of future conditions complicate the planning of future water infrastructure and its operating policies. These systems must meet the multi-sector demands of a range of stakeholders whose objectives often conflict. Understanding these conflicts requires exploring many alternative plans to identify possible compromise solutions and important system trade-offs. The uncertainties associated with future conditions such as climate change and population growth challenge the decision-making process. Ideally, planners should consider portfolios of supply and demand management schemes represented as dynamic trajectories over time, able to adapt to the changing environment whilst considering many system goals and plausible futures. Decisions can be scheduled and adapted over the planning period to minimize the present cost of portfolios while maintaining the supply-demand balance and ecosystem services as the future unfolds. Yet such plans are difficult to identify due to the large number of alternative plans to choose from, the uncertainty of future conditions and the computational complexity of such problems. Our study optimizes London's future water supply system investments as well as their scheduling and adaptation over time using many-objective scenario optimization, an efficient water resource system simulator, and visual analytics for exploring key system trade-offs. The solutions are compared to Pareto-approximate portfolios obtained from previous work in which the composition of infrastructure portfolios did not change over the planning period. We explore how the visual analysis of solutions can aid decision making by investigating the implied performance trade-offs and how the individual schemes and their trajectories present in the Pareto-approximate portfolios affect the system's behaviour. By doing so, decision makers are given the opportunity to decide the balance between many system goals a posteriori as well as
NASA Astrophysics Data System (ADS)
Pathak, Harshavardhana S.; Shukla, Ratnesh K.
2016-08-01
A high-order adaptive finite-volume method is presented for simulating inviscid compressible flows on time-dependent redistributed grids. The method achieves dynamic adaptation through a combination of time-dependent mesh node clustering in regions characterized by strong solution gradients and an optimal selection of the order of accuracy and the associated reconstruction stencil in a conservative finite-volume framework. This combined approach maximizes spatial resolution in discontinuous regions that require low-order approximations for oscillation-free shock capturing. Over smooth regions, high-order discretization through finite-volume WENO schemes minimizes numerical dissipation and provides excellent resolution of intricate flow features. The method, including the moving mesh equations and the compressible flow solver, is formulated entirely on a transformed time-independent computational domain discretized using a simple uniform Cartesian mesh. Approximations for the metric terms that enforce the discrete geometric conservation law while preserving the fourth-order accuracy of the two-point Gaussian quadrature rule are developed. Spurious Cartesian grid induced shock instabilities such as carbuncles that feature in a local one-dimensional contact capturing treatment along the cell face normals are effectively eliminated through upwind flux calculation using a rotated Harten-Lax-van Leer contact-resolving (HLLC) approximate Riemann solver for the Euler equations in generalized coordinates. Numerical experiments with the fifth- and ninth-order WENO reconstructions at the two-point Gaussian quadrature nodes, over a range of challenging test cases, indicate that the redistributed mesh effectively adapts to the dynamic flow gradients thereby improving the solution accuracy substantially even when the initial starting mesh is non-adaptive. The high adaptivity combined with the fifth- and especially the ninth-order WENO reconstruction allows remarkably sharp capture of
Weare, Jonathan; Dinner, Aaron R.; Roux, Benoît
2016-01-01
A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826
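The hybrid MD-MC idea, propagate with the inexpensive Hamiltonian and correct with a Metropolis test on the expensive one, can be sketched on a one-particle toy problem. The quartic reference potential and harmonic surrogate below are my assumptions, not the paper's Hamiltonians:

```python
import math, random

random.seed(0)

U_exp = lambda x: 0.5 * x * x + 0.1 * x ** 4  # "expensive" reference potential
F_cheap = lambda x: -x                         # force of the cheap surrogate U = x^2/2

def leapfrog(x, p, dt, nsteps):
    """Time-reversible, volume-preserving propagation under the CHEAP force."""
    p += 0.5 * dt * F_cheap(x)
    for _ in range(nsteps - 1):
        x += dt * p
        p += dt * F_cheap(x)
    x += dt * p
    p += 0.5 * dt * F_cheap(x)
    return x, p

def hmc_step(x, dt=0.2, nsteps=10):
    p = random.gauss(0.0, 1.0)
    H0 = U_exp(x) + 0.5 * p * p
    xn, pn = leapfrog(x, p, dt, nsteps)  # dynamics shifted to the cheap Hamiltonian
    H1 = U_exp(xn) + 0.5 * pn * pn
    # Metropolis criterion absorbs both the surrogate mismatch and the
    # discretization error, treating them as external work
    return xn if random.random() < math.exp(min(0.0, H0 - H1)) else x

x, samples = 0.0, []
for _ in range(20000):
    x = hmc_step(x)
    samples.append(x)
mean_x2 = sum(s * s for s in samples) / len(samples)
print(round(mean_x2, 2))  # below 1: the quartic term narrows the Gaussian
```

Because the leapfrog map is reversible and volume-preserving regardless of which force drives it, accepting with the expensive energy difference keeps the chain consistent with the expensive Boltzmann distribution, which is the detailed-balance argument the abstract invokes.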
Adapting parcellation schemes to study fetal brain connectivity in serial imaging studies.
Cheng, Xi; Wilm, Jakob; Seshamani, Sharmishtaa; Fogtmann, Mads; Kroenke, Christopher; Studholme, Colin
2013-01-01
A crucial step in studying brain connectivity is the definition of the regions of interest (ROIs) which are considered as nodes of a network graph. These ROIs, identified in structural imaging, reflect consistent functional regions in the anatomies being compared. However, in serial studies of the developing fetal brain, such functional and associated structural markers are not consistently present over time. In this study we adapt two non-atlas-based parcellation schemes to study the development of connectivity networks of a fetal monkey brain using diffusion weighted imaging techniques. Results demonstrate that the fetal brain network exhibits small-world characteristics and a pattern of increased cluster coefficients and decreased global efficiency. These findings may provide a route to creating a new biomarker for healthy fetal brain development.
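The two graph measures behind the small-world analysis, mean clustering coefficient and global efficiency, can be computed as sketched below on a toy adjacency structure standing in for a parcellated connectivity graph (an assumed illustration, not the study's pipeline):

```python
from collections import deque

def clustering(adj):
    """Mean local clustering coefficient over all nodes (adj: node -> set of nodes)."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # degree < 2: coefficient counted as zero
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def global_efficiency(adj):
    """Mean inverse shortest-path length over all node pairs (BFS distances)."""
    n, acc = len(adj), 0.0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        acc += sum(1.0 / d for v, d in dist.items() if v != s)
    return acc / (n * (n - 1))

# a triangle plus a pendant node
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(round(clustering(adj), 3), round(global_efficiency(adj), 3))  # 0.583 0.833
```

High clustering together with short paths (high efficiency) relative to a random graph is the usual operational definition of the small-world property reported in the abstract.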
Design of signal-adapted multidimensional lifting scheme for lossy coding.
Gouze, Annabelle; Antonini, Marc; Barlaud, Michel; Macq, Benoît
2004-12-01
This paper proposes a new method for the design of lifting filters to compute a multidimensional nonseparable wavelet transform. Our approach is stated in the general case, and is illustrated for the 2-D separable and for the quincunx images. Results are shown for the JPEG2000 database and for satellite images acquired on a quincunx sampling grid. The design of efficient quincunx filters is a difficult challenge which has already been addressed for specific cases. Our approach enables the design of less expensive filters adapted to the signal statistics to enhance the compression efficiency in a more general case. It is based on a two-step lifting scheme and joins the lifting theory with Wiener's optimization. The prediction step is designed in order to minimize the variance of the signal, and the update step is designed in order to minimize a reconstruction error. Application for lossy compression shows the performances of the method.
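A concrete two-step lifting pair (predict, then update) can be sketched in 1-D with the classic linear-predict coefficients rather than the paper's Wiener-optimized, signal-adapted filters:

```python
def lifting_forward(x):
    """One lifting level: predict odds from evens, then update evens from details."""
    even, odd = x[0::2], x[1::2]
    # predict step: detail = odd sample minus the average of its flanking evens
    detail = [o - 0.5 * (even[i] + even[min(i + 1, len(even) - 1)])
              for i, o in enumerate(odd)]
    # update step: coarse = even plus a quarter of the neighbouring details
    coarse = [e + 0.25 * (detail[max(i - 1, 0)] + detail[min(i, len(detail) - 1)])
              for i, e in enumerate(even)]
    return coarse, detail

def lifting_inverse(coarse, detail):
    """Undo the update, then the predict: lifting is invertible by construction."""
    even = [c - 0.25 * (detail[max(i - 1, 0)] + detail[min(i, len(detail) - 1)])
            for i, c in enumerate(coarse)]
    odd = [d + 0.5 * (even[i] + even[min(i + 1, len(even) - 1)])
           for i, d in enumerate(detail)]
    x = [0.0] * (len(even) + len(odd))
    x[0::2], x[1::2] = even, odd
    return x

sig = [3.0, 5.0, 4.0, 8.0, 1.0, 2.0]
c, d = lifting_forward(sig)
print(lifting_inverse(c, d) == sig)  # True: perfect reconstruction
```

The paper's contribution is to choose the predict filter to minimize the detail variance and the update filter to minimize a reconstruction error (Wiener optimization); the invertibility shown here holds for any such filter choice, which is what makes lifting attractive for coding.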
NASA Astrophysics Data System (ADS)
Zhang, Peng; Zhang, Na; Deng, Yuefan; Bluestein, Danny
2015-03-01
We developed a multiple time-stepping (MTS) algorithm for multiscale modeling of the dynamics of platelets flowing in viscous blood plasma. This MTS algorithm improves considerably the computational efficiency without significant loss of accuracy. This study of the dynamic properties of flowing platelets employs a combination of the dissipative particle dynamics (DPD) and the coarse-grained molecular dynamics (CGMD) methods to describe the dynamic microstructures of deformable platelets in response to extracellular flow-induced stresses. The disparate spatial scales between the two methods are handled by a hybrid force field interface. However, the disparity in temporal scales between the DPD and CGMD that requires time stepping at microseconds and nanoseconds respectively, represents a computational challenge that may become prohibitive. Classical MTS algorithms manage to improve computing efficiency by multi-stepping within DPD or CGMD for up to one order of magnitude of scale differential. In order to handle 3-4 orders of magnitude disparity in the temporal scales between DPD and CGMD, we introduce a new MTS scheme hybridizing DPD and CGMD by utilizing four different time stepping sizes. We advance the fluid system at the largest time step, the fluid-platelet interface at a middle timestep size, and the nonbonded and bonded potentials of the platelet structural system at two smallest timestep sizes. Additionally, we introduce parameters to study the relationship of accuracy versus computational complexities. The numerical experiments demonstrated 3000x reduction in computing time over standard MTS methods for solving the multiscale model. This MTS algorithm establishes a computationally feasible approach for solving a particle-based system at multiple scales for performing efficient multiscale simulations.
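The four-level time-stepping hierarchy can be sketched structurally; the step ratios and the logging stand-in for force evaluations below are hypothetical, not the DPD/CGMD force fields or the ratios used in the study:

```python
log = []

def mts_step(n_iface=2, n_nonbond=5, n_bond=10):
    """One outer step of a four-level multiple time-stepping hierarchy."""
    log.append("fluid")                      # largest step: fluid (DPD) system
    for _ in range(n_iface):
        log.append("interface")              # middle step: fluid-platelet interface
        for _ in range(n_nonbond):
            log.append("nonbonded")          # small step: platelet nonbonded forces
            for _ in range(n_bond):
                log.append("bonded")         # smallest step: bonded forces
    # one outer step advances the whole system by the largest time-step size

mts_step()
print(len(log))  # 1 + 2*(1 + 5*(1 + 10)) = 113 force evaluations per outer step
```

The point of the hierarchy is that the expensive coarse-scale forces are evaluated rarely while only the cheap, fast-varying bonded forces are evaluated at the finest time step, which is where the reported speed-up comes from.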
A general hybrid radiation transport scheme for star formation simulations on an adaptive grid
Klassen, Mikhail; Pudritz, Ralph E.; Kuiper, Rolf; Peters, Thomas; Banerjee, Robi; Buntemeyer, Lars
2014-12-10
Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.
Adaptive Numerical Dissipation Control in High Order Schemes for Multi-D Non-Ideal MHD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, B.
2005-01-01
The required type and amount of numerical dissipation/filter to accurately resolve all relevant multiscales of complex MHD unsteady high-speed shock/shear/turbulence/combustion problems are not only physical problem dependent, but also vary from one flow region to another. In addition, proper and efficient control of the divergence of the magnetic field (Div(B)) numerical error for high order shock-capturing methods poses extra requirements for the considered type of CPU intensive computations. The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that include viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free from numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multiresolution wavelets (WAV) (for the above types of flow feature). These filters also provide a natural and efficient way for the minimization of Div(B) numerical error.
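A flow-feature detector in this spirit can be sketched as a normalized second-difference sensor that flags only the cells needing dissipation and leaves smooth regions filter-free; this is an assumed illustration, not the paper's ACM or wavelet detectors:

```python
def shock_sensor(u, eps=1e-12, threshold=0.9):
    """Return booleans: should numerical dissipation be applied at interior cell i?"""
    flags = [False] * len(u)
    for i in range(1, len(u) - 1):
        # second difference, normalized by the local first differences:
        # near 0 in smooth regions, near 1 at a discontinuity
        num = abs(u[i + 1] - 2 * u[i] + u[i - 1])
        den = abs(u[i + 1] - u[i]) + abs(u[i] - u[i - 1]) + eps
        flags[i] = num / den > threshold
    return flags

smooth = [x * x * 0.01 for x in range(10)]  # smooth parabolic ramp
step = [0.0] * 5 + [1.0] * 5                # discontinuity between cells 4 and 5
print(sum(shock_sensor(smooth)), sum(shock_sensor(step)))  # 0 2
```

A filter scheme would then add its dissipative correction only where the flag is set, which is the "leave the rest of the region free from numerical dissipation contamination" strategy described in the abstract.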
NASA Technical Reports Server (NTRS)
Coirier, William John
1994-01-01
A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to a structured solver formulation. A non-smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other, to experimental results and/or theory for a series of low and moderate Reynolds number flow fields. The most suitable viscous discretization is demonstrated for geometrically-complicated internal flows. For flows at high Reynolds numbers, both an altered grid-generation procedure and a
NASA Astrophysics Data System (ADS)
Benaskeur, Abder R.; Roy, Jean
2001-08-01
Sensor Management (SM) has to do with how to best manage, coordinate and organize the use of sensing resources in a manner that synergistically improves the process of data fusion. Based on the contextual information, SM develops options for collecting further information, allocates and directs the sensors towards the achievement of the mission goals and/or tunes the parameters for the real-time improvement of the effectiveness of the sensing process. Conscious of the important role that SM has to play in modern data fusion systems, we are currently studying advanced SM concepts that would help increase the survivability of the current Halifax and Iroquois Class ships, as well as their possible future upgrades. For this purpose, a hierarchical scheme has been proposed for data fusion and resource management adaptation, based on control theory and within the process refinement paradigm of the JDL data fusion model, and taking into account the multi-agent model put forward by the SASS Group for the situation analysis process. The novelty of this work lies in the unified framework that has been defined for tackling the adaptation of both the fusion process and the sensor/weapon management.
NASA Astrophysics Data System (ADS)
Chen, Xianshun; Feng, Liang; Ong, Yew Soon
2012-07-01
In this article, we propose a self-adaptive memeplex robust search (SAMRS) for finding robust and reliable solutions that are less sensitive to the stochastic behaviour of customer demands and have a low probability of route failure in the vehicle routing problem with stochastic demands (VRPSD). In particular, the contribution of this article is three-fold. First, the proposed SAMRS employs the robust solution search scheme (RS3) as an approximation of the computationally intensive Monte Carlo simulation, thus reducing the computation cost of fitness evaluation in VRPSD, while directing the search towards robust and reliable solutions. Furthermore, a self-adaptive individual learning based on the conceptual modelling of the memeplex is introduced in the SAMRS. Finally, SAMRS incorporates a gene-meme co-evolution model with genetic and memetic representations to effectively manage the search for solutions in VRPSD. Extensive experimental results are then presented for benchmark problems to demonstrate that the proposed SAMRS serves as an effective means of generating high-quality robust and reliable solutions in VRPSD.
Load adaptive start-up scheme for synchronous boost DC-DC converter
NASA Astrophysics Data System (ADS)
Guoding, Dai; Wenliang, Xiu; Yuezhi, Liu; Yawei, Qi; Zuqi, Dong
2016-10-01
This paper presents a load adaptive soft-start scheme through which the inductor current of a synchronous boost DC-DC converter can trace the load current at the start-up stage. This scheme effectively eliminates inrush current and overshoot voltage and improves the load capability of the converter. According to the output voltage, the start-up process is divided into three phases, and at each phase the inductor current is limited to match the load. In the pre-charge phase, a step-increasing constant current gives a smooth rise of the output voltage, which avoids inrush current and ensures the converter successfully starts up under different load conditions. An additional ring-oscillator operation phase enables the converter to start up from inputs as low as 1.4 V. When the converter enters the system-loop soft-start phase, output-voltage and inductor-current detection methods make the phase transitions smooth so that the inductor current and output voltage rise steadily. Effective protection circuits, such as short-circuit protection, a current-limit circuit and an over-temperature protection circuit, are designed to guarantee the safety and reliability of the chip during the start-up process. The proposed start-up circuit is implemented in a synchronous boost DC-DC converter in a TSMC 0.35 μm CMOS process with an input voltage range of 1.4-4.2 V, a steady output voltage of 5 V, and a switching frequency of 1 MHz. Simulation results show that inrush current and overshoot voltage are suppressed over a load range of 0-2.1 A, and the inductor current is as low as 259 mA when the output is shorted to ground.
NASA Astrophysics Data System (ADS)
He, Jing; Li, Teng; Wen, Xuejie; Deng, Rui; Chen, Ming; Chen, Lin
2016-01-01
To overcome the unbalanced error-bit distribution among subcarriers caused by inter-subcarrier mixing interference (ISMI) and frequency selective fading (FSF), an adaptive modulation scheme based on 64/16/4QAM modulation is proposed and experimentally investigated in an intensity-modulation direct-detection (IM/DD) multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband (UWB) over fiber system. After 50 km standard single-mode fiber (SSMF) transmission, at a bit error ratio (BER) of 1×10-3, the experimental results show that the power penalty of the IM/DD MB-OFDM UWBoF system with the 64/16/4QAM adaptive modulation scheme is about 3.6 dB, compared to that with the 64QAM modulation scheme. Moreover, the receiver sensitivity is improved by about 0.52 dB when the intra-symbol frequency-domain averaging (ISFA) algorithm is employed in the IM/DD MB-OFDM UWBoF system based on the 64/16/4QAM adaptive modulation scheme. Meanwhile, after 50 km SSMF transmission, there is a negligible power penalty in the adaptively modulated IM/DD MB-OFDM UWBoF system compared to the optical back-to-back case.
Convergence of Godunov-Type Schemes for Scalar Conservation Laws Under Large Time Steps
2006-01-01
f(u) = 0, in R^m × [0, T], u(x, 0) = u_0(x), in R^m (1.1), where u_0(x) ∈ BV, the space of functions of bounded variation. We do not consider boundary... entropy solution operator S_Δt of (1.1) [10]. Also, for a bounded-variation function w, it is easy to verify that ‖w − A(w)‖_L1 ≤ CΔx TV(w) (2.6). In... it is of bounded variation. The proof for the multi-dimensional case is similar. We can now proceed to prove the desired convergence e_n = ‖u(·, t_n
An Indirect Adaptive Control Scheme in the Presence of Actuator and Sensor Failures
NASA Technical Reports Server (NTRS)
Sun, Joy Z.; Joshi, Suresh M.
2009-01-01
The problem of controlling a system in the presence of unknown actuator and sensor faults is addressed. The system is assumed to have groups of actuators, and groups of sensors, with each group consisting of multiple redundant similar actuators or sensors. The types of actuator faults considered consist of unknown actuators stuck in unknown positions, as well as reduced actuator effectiveness. The sensor faults considered include unknown biases and outages. The approach employed for fault detection and estimation consists of a bank of Kalman filters based on multiple models, and subsequent control reconfiguration to mitigate the effect of biases caused by failed components as well as to obtain stability and satisfactory performance using the remaining actuators and sensors. Conditions for fault identifiability are presented, and the adaptive scheme is applied to an aircraft flight control example in the presence of actuator failures. Simulation results demonstrate that the method can rapidly and accurately detect faults and estimate the fault values, thus enabling safe operation and acceptable performance in spite of failures.
Adaptive search range adjustment scheme for fast motion estimation in AVC/H.264
NASA Astrophysics Data System (ADS)
Lee, Sunyoung; Choi, Kiho; Jang, Euee S.
2011-06-01
AVC/H.264 supports the use of multiple reference frames (e.g., 5 frames) for motion estimation (ME), which greatly increases the computational complexity of ME. We propose an adaptive search range adjustment scheme that reduces this complexity by shrinking the search range of each reference frame--from the (t-1)'th frame to the (t-5)'th frame--for each macroblock. Based on the statistical observation that the 16×16 mode is selected far more often than the other block-partition modes, the proposed method reduces the search range of the remaining ME process in a given reference frame according to the motion vector (MV) position from the 16×16 block ME. In the case of the (t-1)'th frame, the MV position of the 8×8 block ME--in addition to that of the 16×16 block ME--is also used to reduce the search range for the sub-block partition modes of the 8×8 block. The experimental results show that the proposed method reduces the total encoding time by about 50% and 65% for CIF/SIF and full HD test sequences, respectively, without any noticeable visual degradation, compared to the full search method of the AVC/H.264 encoder.
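The range-reduction idea above can be sketched as a small helper: once the 16×16-block search has found a motion vector, later searches in the same reference frame are confined to a window just large enough to cover that vector. This is a hypothetical illustration of the idea, not the paper's exact rule; the margin parameter is an assumption.

```python
def adjusted_search_range(mv_16x16, base_range, margin=2):
    """Shrink the motion-estimation search range for the remaining
    block-partition modes around the motion vector (mx, my) already
    found by the 16x16-block search. The margin is an assumed tuning
    parameter; the result never exceeds the original base_range."""
    mx, my = mv_16x16
    reduced = max(abs(mx), abs(my)) + margin
    return min(reduced, base_range)
```

A vector of (3, -1) in a ±16 window, for example, would confine the remaining searches to a ±5 window, while a large vector leaves the window unchanged.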
Thermodynamics and kinetics of large-time-step molecular dynamics.
Rao, Francesco; Spichty, Martin
2012-02-15
Molecular dynamics (MD) simulations provide essential information about the thermodynamics and kinetics of proteins. Technological advances in both hardware and algorithms have seen this method access timescales that were unreachable only a few years ago. The quest to simulate slow, biologically relevant macromolecular conformational changes is still open. Here, we present an approximate approach that increases the speed of MD simulations by a factor of ∼4.5. This is achieved by using a large integration time step of 7 fs, in combination with frozen covalent bonds and look-up tables for the nonbonded interactions of the solvent. Extensive atomistic MD simulations of a flexible peptide in water show that the approach reproduces the peptide's equilibrium conformational changes, preserving the essential properties of both thermodynamics and kinetics. Comparison of this approximate method with state-of-the-art implicit-solvation simulations indicates that the former provides a better description of the underlying free-energy surface. Finally, simulations of a 33-residue peptide show that these fast MD settings are readily applicable to biologically relevant systems.
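One ingredient of the recipe above, look-up tables for nonbonded interactions, can be illustrated with a tabulated Lennard-Jones potential. This is a generic sketch only; the paper tabulates its own solvent interactions, and all parameters below are assumptions.

```python
import numpy as np

def build_lj_table(eps, sigma, r_min, r_max, n):
    """Pre-tabulate the Lennard-Jones potential on a uniform grid so
    the inner MD loop can replace repeated power evaluations with a
    cheap linear interpolation."""
    r = np.linspace(r_min, r_max, n)
    u = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return r, u

def lj_lookup(r_grid, u_grid, r):
    """Look up the potential at distance r by linear interpolation."""
    return float(np.interp(r, r_grid, u_grid))
```

With a few thousand grid points the interpolation error at the potential minimum is far below thermal energy scales, which is why tabulation is a common speed-up in MD inner loops.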
Yin, Jun; Yang, Yuwang; Wang, Lei
2016-01-01
Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering—CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that its value is known before each data gathering epoch begins; they thus ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of a feedback CDG scheme in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes—MLMS) and implemented a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both an ocean-temperature dataset and a practical network deployment also prove the effectiveness of the proposed feedback CDG scheme. PMID:27043574
Operational flood control of a low-lying delta system using large time step Model Predictive Control
NASA Astrophysics Data System (ADS)
Tian, Xin; van Overloop, Peter-Jules; Negenborn, Rudy R.; van de Giesen, Nick
2015-01-01
The safety of low-lying deltas is threatened not only by riverine flooding but also by storm-induced coastal flooding. For the purpose of flood control, these deltas are mostly protected in a man-made environment, in which dikes, dams and other adjustable structures, such as gates, barriers and pumps, are widely constructed. Instead of always reinforcing and heightening these structures, it is worth considering making the most of the existing infrastructure to reduce damage and manage the delta in an operational, integrated way. In this study, an advanced real-time control approach, Model Predictive Control (MPC), is proposed to operate these structures in the Dutch delta system (the Rhine-Meuse delta). The application covers non-linearity in the dynamic behavior of the water system and the structures. To deal with the non-linearity, a linearization scheme is applied that directly uses the gate height, instead of the structure flow, as the control variable. Given that MPC needs to compute control actions in real time, we also address computational time. A new large time step scheme is proposed to save computation time, in which different control variables can have different control time steps. Simulation experiments demonstrate that MPC with the large time step setting controls the delta system better and much more efficiently than the conventional operational schemes.
A massively parallel adaptive scheme for melt migration in geodynamics computations
NASA Astrophysics Data System (ADS)
Dannberg, Juliane; Heister, Timo; Grove, Ryan
2016-04-01
Melt generation and migration are important processes for the evolution of the Earth's interior and impact the global convection of the mantle. While they have been the subject of numerous investigations, the typical time and length-scales of melt transport are vastly different from global mantle convection, which determines where melt is generated. This makes it difficult to study mantle convection and melt migration in a unified framework. In addition, modelling magma dynamics poses the challenge of highly non-linear and spatially variable material properties, in particular the viscosity. We describe our extension of the community mantle convection code ASPECT that adds equations describing the behaviour of silicate melt percolating through and interacting with a viscously deforming host rock. We use the original compressible formulation of the McKenzie equations, augmented by an equation for the conservation of energy. This approach includes both melt migration and melt generation with the accompanying latent heat effects, and it incorporates the individual compressibilities of the solid and the fluid phase. For this, we derive an accurate and stable Finite Element scheme that can be combined with adaptive mesh refinement. This is particularly advantageous for this type of problem, as the resolution can be increased in mesh cells where melt is present and viscosity gradients are high, whereas a lower resolution is sufficient in regions without melt. Together with a high-performance, massively parallel implementation, this allows for high resolution, 3d, compressible, global mantle convection simulations coupled with melt migration. Furthermore, scalable iterative linear solvers are required to solve the large linear systems arising from the discretized system. Finally, we present benchmarks and scaling tests of our solver up to tens of thousands of cores, show the effectiveness of adaptive mesh refinement when applied to melt migration and compare the
NASA Astrophysics Data System (ADS)
Ryerson, F. J.; Ezzedine, S. M.; Antoun, T.
2013-12-01
equation for the distribution of k is solved, provided that Cauchy data are appropriately assigned. In the next stage, only a limited number of passive measurements are provided. In this case, the forward and inverse PDEs are solved simultaneously. This is accomplished by adding regularization terms and filtering the pressure gradients in the inverse problem. The forward and inverse problems are coupled, either simultaneously or sequentially, and solved using implicit schemes, adaptive mesh refinement and Galerkin finite elements. The final case arises when P, k, and Q data exist only at producing wells. This exceedingly ill-posed problem calls for additional constraints on the forward-inverse coupling to ensure that the production rates are satisfied at the desired locations. Results from all three cases are presented, demonstrating the stability and accuracy of the proposed approach and, more importantly, providing some insights into the consequences of data under-sampling, uncertainty propagation and quantification. We illustrate the advantages of this novel approach over common forward UQ drivers on several subsurface energy problems in porous, fractured and/or faulted reservoirs. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
NASA Astrophysics Data System (ADS)
Gharamti, M. E.; Valstar, J.; Hoteit, I.
2014-09-01
Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires a clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI), in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% relative to the standard EnKF scheme.
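The core of any EnKF-based scheme like the one above is the ensemble analysis step. The following is a minimal stochastic-EnKF update for illustration only; the paper's hybrid EnKF-OI additionally blends this sample covariance with a static background covariance, which is not reproduced here.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Stochastic EnKF analysis step.
    X : (n, N) forecast ensemble of n state variables, N members
    y : (m,) observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation-error covariance"""
    _, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    Pf = A @ A.T / (N - 1)                           # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    # perturbed observations, one independent draw per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, N).T
    return X + K @ (Y - H @ X)
```

With an accurate observation (small R), the analysis ensemble mean of the observed component is pulled nearly all the way to the measurement, while unobserved components are corrected through the sampled cross-covariances.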
Daily Time Step Refinement of Optimized Flood Control Rule Curves for a Global Warming Scenario
NASA Astrophysics Data System (ADS)
Lee, S.; Fitzgerald, C.; Hamlet, A. F.; Burges, S. J.
2009-12-01
Pacific Northwest temperatures have warmed by 0.8 °C since 1920 and are predicted to further increase in the 21st century. Simulated streamflow timing shifts associated with climate change have been found in past research to degrade water resources system performance in the Columbia River Basin when using existing system operating policies. To adapt to these hydrologic changes, optimized flood control operating rule curves were developed in a previous study using a hybrid optimization-simulation approach which rebalanced flood control and reservoir refill at a monthly time step. For the climate change scenario, use of the optimized flood control curves restored reservoir refill capability without increasing flood risk. Here we extend the earlier studies using a detailed daily time step simulation model applied over a somewhat smaller portion of the domain (encompassing Libby, Duncan, and Corra Linn dams, and Kootenai Lake) to evaluate and refine the optimized flood control curves derived from monthly time step analysis. Moving from a monthly to daily analysis, we found that the timing of flood control evacuation needed adjustment to avoid unintended outcomes affecting Kootenai Lake. We refined the flood rule curves derived from monthly analysis by creating a more gradual evacuation schedule, but kept the timing and magnitude of maximum evacuation the same as in the monthly analysis. After these refinements, the performance at monthly time scales reported in our previous study proved robust at daily time scales. Due to a decrease in July storage deficits, additional benefits such as more revenue from hydropower generation and more July and August outflow for fish augmentation were observed when the optimized flood control curves were used for the climate change scenario.
A simple method for improving the time-stepping accuracy in atmosphere and ocean models
NASA Astrophysics Data System (ADS)
Williams, P. D.
2012-12-01
In contemporary numerical simulations of the atmosphere and ocean, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. A common time-stepping method in atmosphere and ocean models is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter, which has become known as the RAW filter (Williams 2009, 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other atmosphere and ocean models. References PD Williams (2009) A
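The RA and RAW filters are simple enough to state in a few lines. Below is a sketch of a filtered leapfrog integrator for dx/dt = f(x): setting alpha = 1 recovers the classical Robert-Asselin filter, while alpha ≈ 0.53 gives the RAW variant described above. The filter parameter nu and the test problem are assumptions for illustration.

```python
import numpy as np

def leapfrog_raw(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
    """Leapfrog integration of dx/dt = f(x) with the RAW filter
    (Williams 2009). alpha = 1.0 recovers the Robert-Asselin filter."""
    xm = np.asarray(x0, dtype=float)      # filtered state at step n-1
    xn = xm + dt * f(xm)                  # forward-Euler start for step n
    for _ in range(nsteps - 1):
        xp = xm + 2.0 * dt * f(xn)        # raw leapfrog value at n+1
        d = 0.5 * nu * (xm - 2.0 * xn + xp)
        xm = xn + alpha * d               # fully filtered value at n
        xn = xp + (alpha - 1.0) * d       # partially filtered value at n+1
    return xn
```

On a harmonic oscillator, the RA-filtered run (alpha = 1) visibly damps the amplitude while the RAW-filtered run keeps it near unity, consistent with the amplitude-accuracy claim in the abstract.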
A Posteriori Error Estimation of Adaptive Finite Difference Schemes for Hyperbolic Systems
1988-06-01
scheme have been studied by Ciment (ref 24), Fritts (ref 25), Hoffman (ref 26), Osher and Sanders (ref 27), Sanders (ref 28), and Mastin (ref 29)... Methods for Partial Differential Equations, SIAM, Philadelphia, 1983. 24. Ciment, M., "Stable Difference Schemes With Uneven Mesh Spacings," Math. Comp
Raul, Pramod R; Pagilla, Prabhakar R
2015-05-01
In this paper, two adaptive Proportional-Integral (PI) control schemes are designed and discussed for the control of web tension in Roll-to-Roll (R2R) manufacturing systems. R2R systems are used to transport continuous materials (called webs) on rollers from an unwind roll to a rewind roll. Maintaining web tension at the desired value is critical to many R2R processes such as printing, coating and lamination. Existing fixed-gain PI tension control schemes currently used in industrial practice require extensive tuning and do not provide the desired performance under changing operating conditions and material properties. The first adaptive PI scheme utilizes the model reference approach, in which the controller gains are estimated by matching the actual closed-loop tension control system with an appropriately chosen reference model. The second adaptive PI scheme utilizes the indirect adaptive control approach together with a relay feedback technique to automatically initialize the adaptive PI gains. These adaptive tension control schemes can be implemented on any R2R manufacturing system. The key features of the two adaptive schemes are that their designs are simple for practicing engineers, easy to implement in real time, and automate the tuning process. Extensive experiments are conducted on a large experimental R2R machine that mimics many features of an industrial R2R machine. These experiments include trials with two different polymer webs and a variety of operating conditions. Implementation guidelines are provided for both adaptive schemes. Experimental results comparing the two adaptive schemes and a fixed-gain PI tension control scheme used in industrial practice are provided and discussed.
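The relay-feedback initialization mentioned for the second scheme follows the classic Åström-Hägglund idea: excite the loop with a relay, measure the resulting limit-cycle amplitude and period, estimate the ultimate gain from the describing function, and back out PI gains. A minimal sketch follows; the Ziegler-Nichols tuning rules used here are one common choice, not necessarily the paper's.

```python
import math

def pi_gains_from_relay(d, a, Pu):
    """Estimate PI gains from a relay-feedback experiment.
    d  : relay output amplitude
    a  : measured amplitude of the process limit cycle
    Pu : measured period of the limit cycle
    Returns (Kp, Ki) using the Ziegler-Nichols PI rules."""
    Ku = 4.0 * d / (math.pi * a)  # ultimate gain (describing-function estimate)
    Kp = 0.45 * Ku
    Ti = Pu / 1.2                 # integral time
    return Kp, Kp / Ti
```

The appeal in a tension-control context is exactly what the abstract emphasizes: the experiment is automatic, so no manual gain tuning is needed before the adaptive loop takes over.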
NASA Astrophysics Data System (ADS)
Mulder, W. A.; Zhebel, E.; Minisini, S.
2014-02-01
We analyse the time-stepping stability for the 3-D acoustic wave equation, discretized on tetrahedral meshes. Two types of methods are considered: mass-lumped continuous finite elements and the symmetric interior-penalty discontinuous Galerkin method. Combining the spatial discretization with the leap-frog time-stepping scheme, which is second-order accurate and conditionally stable, leads to a fully explicit scheme. We provide estimates of its stability limit for simple cases, namely, the reference element with Neumann boundary conditions, its distorted version of arbitrary shape, the unit cube that can be partitioned into six tetrahedra with periodic boundary conditions, and its distortions. The Courant-Friedrichs-Lewy stability limit contains an element diameter, for which we considered different options. The one based on the sum of the eigenvalues of the spatial operator for the first-degree mass-lumped element gives the best results. It resembles the diameter of the inscribed sphere but is slightly easier to compute. The stability estimates show that the mass-lumped continuous and the discontinuous Galerkin finite elements of degree two have comparable stability conditions, whereas the mass-lumped elements of degrees one and three allow for larger time steps.
NASA Astrophysics Data System (ADS)
Bajc, Iztok; Hecht, Frédéric; Žumer, Slobodan
2016-09-01
This paper presents a 3D mesh adaptivity strategy on unstructured tetrahedral meshes driven by a posteriori error estimates based on metrics derived from the Hessian of a solution. The study is made on the case of a nonlinear finite element minimization scheme for the Landau-de Gennes free energy functional of nematic liquid crystals. Newton's iteration for tensor fields is employed, with the steepest descent method possibly stepping in. Aspects relating to the driving of mesh adaptivity within the nonlinear scheme are considered. The algorithmic performance is found to depend on at least two factors: when to trigger each single mesh adaptation, and the precision of the correlated remeshing. Each factor is represented by a parameter, whose value may vary for every new mesh adaptation. We empirically show that the overall convergence time of the algorithm can vary considerably when different sequences of parameters are used, thus posing a question about optimality. The extensive testing and debugging done within this work on the simulation of systems of nematic colloids substantially contributed to upgrading the 3D meshing capabilities of an open-source finite-element-oriented programming language, as well as an outer 3D remeshing module.
Effects of the computational time step on numerical solutions for turbulent flow
NASA Technical Reports Server (NTRS)
Choi, Haecheon; Moin, Parviz
1994-01-01
Effects of large computational time steps on the computed turbulence were investigated using a fully implicit method. In turbulent channel flow computations the largest computational time step in wall units which led to accurate prediction of turbulence statistics was determined. Turbulence fluctuations could not be sustained if the computational time step was near or larger than the Kolmogorov time scale.
NASA Technical Reports Server (NTRS)
Sliwa, S. M.
1984-01-01
A prime obstacle to the widespread use of adaptive control is the degradation of performance and possible instability resulting from the presence of unmodeled dynamics. The approach taken is to explicitly include the unstructured model uncertainty in the output-error identification algorithm. The order of the compensator is successively increased by including identified modes. During this model-building stage, heuristic rules are used to test for convergence prior to designing compensators. Additionally, the recursive identification algorithm was extended to multi-input, multi-output systems. Enhancements were also made to reduce the computational burden of an algorithm for obtaining minimal state-space realizations from the inexact, multivariate transfer functions that result from the identification process. A number of potential adaptive control applications of this approach are illustrated using computer simulations. Results indicate that when speed of adaptation and plant stability are not critical, the proposed schemes converge and enhance system performance.
NASA Astrophysics Data System (ADS)
Ushaq, Muhammad; Fang, Jiancheng
2013-10-01
Integrated navigation systems for various applications generally employ the centralized Kalman filter (CKF), wherein all measured sensor data are communicated to a single central Kalman filter. The advantage of the CKF is minimal loss of information and high precision under benign conditions, but it may suffer from computational overloading and poor fault tolerance. The alternative is the federated Kalman filter (FKF), wherein local estimators deliver optimal or suboptimal state estimates according to a chosen information fusion criterion. The FKF has enhanced throughput and multiple-level fault detection capability. The standard CKF or FKF requires that the system noise and the measurement noise be zero-mean and Gaussian; moreover, the covariances of the system and measurement noises are assumed to remain constant. If the theoretical and actual statistical features employed in the Kalman filter are not compatible, the filter does not render satisfactory solutions, and divergence problems can also occur. To resolve such problems, in this paper an adaptive Kalman filter scheme strengthened with a fuzzy inference system (FIS) is employed to adapt the statistical features of the contributing sensors online, in the light of the real system dynamics and varying measurement noises. Excessive faults are detected and isolated by a chi-square test. As a case study, the presented scheme has been implemented on a Strapdown Inertial Navigation System (SINS) integrated with a Celestial Navigation System (CNS), GPS and Doppler radar using the FKF; collectively, the overall system can be termed a SINS/CNS/GPS/Doppler integrated navigation system. The simulation results have validated the effectiveness of the presented scheme, with significantly enhanced precision, reliability and fault tolerance. The effectiveness of the scheme has been tested against simulated abnormal errors/noises during different time segments of flight. It is believed that the presented scheme can be
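The chi-square fault test mentioned above checks a Kalman-filter innovation against its predicted covariance; an innovation that is statistically too large flags a fault. A minimal sketch follows (the threshold comes from chi-square tables; 5.99 below is the assumed 95% value for a 2-D measurement).

```python
import numpy as np

def innovation_fault_test(nu, S, threshold):
    """Normalized innovation squared (NIS) test for a Kalman filter.
    nu : (m,) innovation vector, S : (m, m) innovation covariance.
    Returns (fault_flag, nis); a fault is declared when the NIS
    exceeds the chi-square threshold for m degrees of freedom."""
    nis = float(nu @ np.linalg.solve(S, nu))
    return nis > threshold, nis
```

In a federated architecture each local filter can run this test on its own sensor, which is what gives the FKF its multiple-level fault detection capability.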
NASA Astrophysics Data System (ADS)
Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.
2014-01-01
Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge-Kutta-like time-steps to advance the parabolic terms by a time-step that is s2 times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge-Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems - a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful in
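The RKL1 recursion is compact enough to sketch. Below, a single super-time-step advances a parabolic term through s inner stages using the Legendre-based coefficients described above; the heat-equation usage and the 0.9 safety factor in the test are assumptions for illustration.

```python
import numpy as np

def rkl1_step(u, rhs, dt, s):
    """One RKL1 super-time-step of size dt built from s inner stages
    (Meyer, Balsara & Aslam). Stable for dt up to (s*s + s)/2 times
    the explicit Euler limit of the parabolic operator rhs."""
    w1 = 2.0 / (s * s + s)
    Yjm2 = u.copy()                      # Y_0
    Yjm1 = u + w1 * dt * rhs(u)          # Y_1
    for j in range(2, s + 1):
        mu = (2.0 * j - 1.0) / j         # note mu_j + nu_j = 1 (consistency)
        nu = (1.0 - j) / j
        Yj = mu * Yjm1 + nu * Yjm2 + mu * w1 * dt * rhs(Yjm1)
        Yjm2, Yjm1 = Yjm1, Yj
    return Yjm1
```

With s = 5 a super-step of up to 15 explicit diffusion limits remains stable, and for s = 1 the recursion reduces to a single explicit Euler step.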
Zhang, Jie; Ni, Ming-Jiu
2014-01-01
The numerical simulation of magnetohydrodynamic (MHD) flows with complex boundaries has been a topic of great interest in the development of fusion reactor blankets, owing to the difficulty of accurately simulating the Hartmann layers and side layers along arbitrary geometries. An adaptive version of a consistent and conservative scheme has been developed for simulating MHD flows. In addition, the present study is the first attempt to apply the cut-cell approach to irregular wall-bounded MHD flows; this approach is more flexible and more conveniently implemented under an adaptive mesh refinement (AMR) technique. It employs a Volume-of-Fluid (VOF) approach to represent the fluid-conducting-wall interface, which makes it possible to solve fluid-solid coupled magnetic problems, with emphasis on how the electric field solver is implemented when the conductivity is discontinuous in a cut-cell. For the irregular cut-cells, a conservative interpolation technique is applied to calculate the Lorentz force at the cell center. It is also shown how the consistent and conservative scheme is implemented on fine/coarse mesh boundaries when the AMR technique is used. The applied numerical schemes are validated by five test simulations; excellent agreement is obtained for all cases considered, while good consistency and conservation properties are maintained.
Adaptive
Kim, Jinho; Seok, Jul-Ki; Muljadi, Eduard; Kang, Yong Cheol
2016-05-01
Wind generators within a wind power plant (WPP) will produce different amounts of active power because of the wake effect, and therefore, they have different reactive power capabilities. This paper proposes an adaptive reactive power-to-voltage (Q-V) scheme for the voltage control of a doubly fed induction generator (DFIG)-based WPP. In the proposed scheme, the WPP controller uses a voltage control mode and sends a voltage error signal to each DFIG. The DFIG controller also employs a voltage control mode utilizing the adaptive Q-V characteristics depending on the reactive power capability, such that a DFIG with a larger reactive power capability will inject more reactive power to ensure fast voltage recovery. Test results indicate that the proposed scheme can recover the voltage within a short time, even for a grid fault with a small short-circuit ratio, by making use of the available reactive power of a WPP and differentiating the reactive power injection in proportion to the reactive power capability. This will, therefore, help to reduce the additional reactive power and ensure fast voltage recovery.
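A minimal sketch of the adaptive Q-V idea follows. The function name, gain form, and numbers are hypothetical; the only feature carried over from the abstract is that the droop gain scales with the DFIG's available reactive power capability, so turbines with more headroom inject more reactive power for the same voltage error.

```python
def q_command(v_error, q_avail, q_avail_max, k_base=5.0):
    # Adaptive Q-V droop (illustrative): a turbine with more reactive
    # headroom gets a proportionally steeper gain, so it injects more Q
    # for the same voltage error; the command is clipped to its capability.
    gain = k_base * (q_avail / q_avail_max)
    q = gain * v_error
    return max(-q_avail, min(q_avail, q))
```

Two turbines seeing the same 0.1 pu voltage error but with capabilities of 1.0 and 0.5 pu would inject 0.5 and 0.25 pu, respectively, under these assumed numbers.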
ERIC Educational Resources Information Center
Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Oliva, Doretta; Campodonico, Francesca; Lang, Russell
2012-01-01
The present three single-case studies assessed the effectiveness of technology-based programs to help three persons with multiple disabilities exercise adaptive response schemes independently. The response schemes included (a) left and right head movements for a man who kept his head increasingly static on his wheelchair's headrest (Study I), (b)…
Kumar, Navneet; Raj Chelliah, Thanga; Srivastava, S P
2015-07-01
Model Based Control (MBC) is one of the energy-optimal controllers used in vector-controlled Induction Motor (IM) drives for controlling the excitation of the motor in accordance with torque and speed. MBC offers energy conservation especially at part-load operation, but it creates ripples in torque and speed during load transitions, leading to poor dynamic performance of the drive. This study investigates the opportunity for improving the dynamic performance of a three-phase IM operating with MBC and proposes three control schemes: (i) MBC with a low-pass filter, (ii) torque-producing current (iqs) injection in the output of the speed controller, and (iii) a Variable Structure Speed Controller (VSSC). The behavior of MBC before and after load transitions is also analyzed. The dynamic performance of a 1-hp, three-phase squirrel-cage IM with a mine-hoist load profile is tested. Test results are provided for the conventional field-oriented (constant flux) control and MBC (adjustable excitation) with the proposed schemes. The effectiveness of the proposed schemes is also illustrated for parametric variations. The test results and subsequent analysis confirm that the motor dynamics improve significantly with all three proposed schemes in terms of overshoot/undershoot peak amplitude of torque and DC-link power, in addition to energy saving during load transitions.
An adaptive modulation scheme for bandwidth-limited meteor-burst channels
NASA Astrophysics Data System (ADS)
Jacobsmeyer, Jay M.
The author investigates the performance of an adaptive information rate technique that is particularly well suited to the bandwidth-limited meteor-burst channel. This technique uses the quadrature amplitude signal sets common to digital radio and is called adaptive QAM. Improvements in throughput that are possible with the proposed approach are examined. The results are pertinent to the use of meteor-burst channels for military applications.
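The rate-adaptation logic of adaptive QAM can be sketched as a threshold rule. The SNR breakpoints and the available constellations below are invented placeholders, since the abstract gives no numbers; the carried-over idea is stepping the constellation order with the decaying meteor-burst channel SNR.

```python
def select_qam_order(snr_db, thresholds=(10.0, 16.0, 22.0, 28.0)):
    # Adaptive QAM sketch: as the meteor-burst channel SNR decays during a
    # burst, step the constellation down through 256/64/16/4-QAM toward a
    # BPSK fallback, keeping throughput high early and the link alive late.
    orders = (2, 4, 16, 64, 256)
    k = sum(snr_db >= t for t in thresholds)   # how many thresholds are met
    return orders[k]
```

At the assumed thresholds, a fresh burst at 30 dB would use 256-QAM, falling back to 4-QAM near 12 dB and BPSK below 10 dB.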
An adaptive critic-based scheme for consensus control of nonlinear multi-agent systems
NASA Astrophysics Data System (ADS)
Heydari, Ali; Balakrishnan, S. N.
2014-12-01
The problem of decentralised consensus control of a network of heterogeneous nonlinear systems is formulated as an optimal tracking problem and a solution is proposed using an approximate dynamic programming based neurocontroller. The neurocontroller training comprises an initial offline training phase and an online re-optimisation phase to account for the fact that the reference signal subject to tracking is not fully known and available ahead of time, i.e., during the offline training phase. As long as the dynamics of the agents are controllable, and the communication graph has a directed spanning tree, this scheme guarantees the synchronisation/consensus even under switching communication topology and directed communication graph. Finally, an aerospace application is selected for the evaluation of the performance of the method. Simulation results demonstrate the potential of the scheme.
A self-adaptive image encryption scheme with half-pixel interchange permutation operation
NASA Astrophysics Data System (ADS)
Ye, Ruisong; Liu, Li; Liao, Minyu; Li, Yafang; Liao, Zikang
2017-01-01
A plain-image dependent image encryption scheme with half-pixel-level swapping permutation strategy is proposed. In the new permutation operation, a pixel-swapping operation between four higher bit-planes and four lower bit-planes is employed to replace the traditional confusion operation, which not only improves the conventional permutation efficiency within the plain-image, but also changes all the pixel gray values. The control parameters of generalized Arnold map applied for the permutation operation are related to the plain-image content and consequently can resist chosen-plaintext and known-plaintext attacks effectively. To enhance the security of the proposed image encryption, one multimodal skew tent map is applied to generate pseudo-random gray value sequence for diffusion operation. Simulations have been carried out thoroughly to demonstrate that the proposed image encryption scheme is highly secure thanks to its large key space and efficient permutation-diffusion operations.
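One plausible reading of the half-pixel interchange, swapping the four higher bit-planes of one pixel with the four lower bit-planes of another, can be sketched as a nibble exchange. The plain-image-dependent Arnold-map pairing that decides which pixels are swapped is omitted here, and this reading is an assumption rather than the paper's exact operation.

```python
def half_pixel_swap(a, b):
    # Exchange the high nibble (four higher bit-planes) of pixel a with the
    # low nibble (four lower bit-planes) of pixel b. Both gray values change,
    # which is the property the abstract highlights. 8-bit pixels assumed.
    a_high, a_low = a & 0xF0, a & 0x0F
    b_high, b_low = b & 0xF0, b & 0x0F
    return (b_low << 4) | a_low, b_high | (a_high >> 4)
```

The operation is invertible, so decryption can undo the permutation given the same chaotic pairing sequence.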
NASA Astrophysics Data System (ADS)
Schwing, Alan Michael
For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source of error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is made for applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable to unsteady simulations, and refinement and coarsening of the grid do not impact the conservatism of the underlying numerics. The effects on high-order numerical fluxes of fourth and sixth order are explored. Provided the criteria for refinement are appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage the large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess the scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depends on a dual-memory approach that is described in detail. Validation of the solver developed here against a number of motivating problems shows favorable…
Time-step limits for a Monte Carlo Compton-scattering method
Densmore, Jeffery D; Warsa, James S; Lowrie, Robert B
2009-01-01
We perform a stability analysis of a Monte Carlo method for simulating the Compton scattering of photons by free electrons in high-energy-density applications and develop time-step limits that avoid unstable and oscillatory solutions. Implementing this Monte Carlo technique in multiphysics problems typically requires evaluating the material temperature at its beginning-of-time-step value, which can lead to this undesirable behavior. With a set of numerical examples, we demonstrate the efficacy of our time-step limits.
Analysis of Adaptive Control Scheme in IEEE 802.11 and IEEE 802.11e Wireless LANs
NASA Astrophysics Data System (ADS)
Lee, Bih-Hwang; Lai, Hui-Cheng
In order to achieve the prioritized quality of service (QoS) guarantee, the IEEE 802.11e enhanced distributed channel access function (EDCAF) provides distinguished services by configuring different QoS parameters for different access categories (ACs). An admission control scheme is needed to maximize the utilization of the wireless channel. Most papers study throughput improvement by solving a complicated multidimensional Markov-chain model. In this paper, we introduce a back-off model to study the transmission probability for different arbitration interframe space numbers (AIFSN) and minimum contention window sizes (CWmin). We propose an adaptive control scheme (ACS) to dynamically update AIFSN and CWmin based on periodic monitoring of the current channel status and QoS requirements, to achieve specific service differentiation at access points (APs). This paper provides an effective tuning mechanism for improving QoS in WLANs. Analytical and simulation results show that the proposed scheme outperforms the basic EDCAF in terms of throughput and service differentiation, especially at high collision rates.
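The tuning loop might look like the following sketch. The thresholds and the doubling/halving rule are assumptions (the paper derives its updates from a back-off model); only the monitored quantity (collision rate) and the tuned parameters (AIFSN and CWmin) come from the abstract.

```python
def adapt_edca(aifsn, cw_min, collision_rate, high=0.2, low=0.05):
    # Widen the contention parameters when collisions are frequent, and
    # tighten them when the channel is underused. CWmin moves through the
    # usual 802.11 power-of-two-minus-one values; AIFSN >= 2 per the standard.
    if collision_rate > high:
        cw_min = min(2 * (cw_min + 1) - 1, 1023)
        aifsn = min(aifsn + 1, 15)
    elif collision_rate < low:
        cw_min = max((cw_min + 1) // 2 - 1, 15)
        aifsn = max(aifsn - 1, 2)
    return aifsn, cw_min
```

Running this per monitoring period at the AP would spread high-priority and low-priority ACs apart under load, which is the service-differentiation effect the abstract targets.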
NASA Astrophysics Data System (ADS)
Tritschler, V. K.; Hu, X. Y.; Hickel, S.; Adams, N. A.
2013-07-01
Two-dimensional simulations of the single-mode Richtmyer-Meshkov instability (RMI) are conducted and compared to experimental results of Jacobs and Krivets (2005 Phys. Fluids 17 034105). The employed adaptive central-upwind sixth-order weighted essentially non-oscillatory (WENO) scheme (Hu et al 2010 J. Comput. Phys. 229 8952-65) introduces only very small numerical dissipation while preserving the good shock-capturing properties of other standard WENO schemes. Hence, it is well suited for simulations with both small-scale features and strong gradients. A generalized Roe average is proposed to make the multicomponent model of Shyue (1998 J. Comput. Phys. 142 208-42) suitable for high-order accurate reconstruction schemes. A first sequence of single-fluid simulations is conducted and compared to the experiment. We find that the WENO-CU6 method better resolves small-scale structures, leading to earlier symmetry breaking and increased mixing. The first simulation, however, fails to correctly predict the global characteristic structures of the RMI. This is due to a mismatch of the post-shock parameters in single-fluid simulations when the pre-shock states are matched with the experiment. When the post-shock parameters are matched, much better agreement with the experimental data is achieved. In a sequence of multifluid simulations, the uncertainty in the density gradient associated with transition between the fluids is assessed. Thereby the multifluid simulations show a considerable improvement over the single-fluid simulations.
Lee, Ji Min; Park, Sung Hwan; Kim, Jong Shik
2013-01-01
A robust control scheme is proposed for the position control of the electrohydrostatic actuator (EHA) when considering hardware saturation, load disturbance, and lumped system uncertainties and nonlinearities. To reduce overshoot due to a saturation of electric motor and to realize robustness against load disturbance and lumped system uncertainties such as varying parameters and modeling error, this paper proposes an adaptive antiwindup PID sliding mode scheme as a robust position controller for the EHA system. An optimal PID controller and an optimal anti-windup PID controller are also designed to compare control performance. An EHA prototype is developed, carrying out system modeling and parameter identification in designing the position controller. The simply identified linear model serves as the basis for the design of the position controllers, while the robustness of the control systems is compared by experiments. The adaptive anti-windup PID sliding mode controller has been found to have the desired performance and become robust against hardware saturation, load disturbance, and lumped system uncertainties and nonlinearities.
Stevens, D.E.; Bretherton, S.
1996-12-01
This paper presents a new forward-in-time advection method for nearly incompressible flow, MU, and its application to an adaptive multilevel flow solver for atmospheric flows. MU is a modification of Leonard et al.'s UTOPIA scheme. MU, like UTOPIA, is based on third-order accurate semi-Lagrangian multidimensional upwinding for constant-velocity flows. For varying velocity fields, MU is a second-order conservative method. MU has greater stability and accuracy than UTOPIA and naturally decomposes into a monotone low-order method and a higher-order accurate correction for use with flux limiting. Its stability and accuracy make it a computationally efficient alternative to current finite-difference advection methods. We present a fully second-order accurate flow solver for the anelastic equations, a prototypical low Mach number flow. The flow solver is based on MU, which is used for both the momentum and scalar transport equations. This flow solver can also be implemented with any forward-in-time advection scheme. The multilevel flow solver conserves discrete global integrals of advected quantities and includes adaptive mesh refinements. Its second-order accuracy is verified using a nonlinear energy conservation integral for the anelastic equations. For a typical geophysical problem in which the flow is most rapidly varying in a small part of the domain, the multilevel flow solver achieves global accuracy comparable to a uniform-resolution simulation for 10% of the computational cost.
NASA Astrophysics Data System (ADS)
Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang
2015-10-01
For the numerical simulation of detonation, the computational cost of using uniform meshes is large due to the vast separation in both time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper aims to propose an AMR method with high order accuracy for the numerical investigation of multi-dimensional detonation. A well-designed AMR method based on a finite difference weighted essentially non-oscillatory (WENO) scheme, named AMR&WENO, is proposed. A new cell-based data structure is used to organize the adaptive meshes. The new data structure makes it possible for cells to communicate with each other quickly and easily. In order to develop an AMR method with high order accuracy, high order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload-balancing parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that the AMR&WENO method is accurate and has high resolution. Moreover, we evaluate and compare the performance of the uniform mesh WENO scheme and the parallel AMR&WENO method. The comparison results provide further insight into the high performance of the parallel AMR&WENO method.
A novel data adaptive detection scheme for distributed fiber optic acoustic sensing
NASA Astrophysics Data System (ADS)
Ölçer, Íbrahim; Öncü, Ahmet
2016-05-01
We introduce a new approach for distributed fiber optic sensing based on adaptive processing of phase-sensitive optical time domain reflectometry (Φ-OTDR) signals. Instead of conventional methods, which utilize frame averaging of detected signal traces, our adaptive algorithm senses a set of noise parameters to enhance the signal-to-noise ratio (SNR) for improved detection performance. This data set is called the secondary data set, from which a weight vector for the detection of a signal is computed. The signal presence is sought in the primary data set. This adaptive technique can be used for vibration detection in health monitoring of various civil structures, as well as for other dynamic monitoring requirements such as pipeline and perimeter security applications.
AZEuS: AN ADAPTIVE ZONE EULERIAN SCHEME FOR COMPUTATIONAL MAGNETOHYDRODYNAMICS
Ramsey, Jon P.; Clarke, David A.; Men'shchikov, Alexander B.
2012-03-01
A new adaptive mesh refinement (AMR) version of the ZEUS-3D astrophysical magnetohydrodynamical fluid code, AZEuS, is described. The AMR module in AZEuS has been completely adapted to the staggered mesh that characterizes the ZEUS family of codes on which scalar quantities are zone-centered and vector components are face-centered. In addition, for applications using static grids, it is necessary to use higher-order interpolations for prolongation to minimize the errors caused by waves crossing from a grid of one resolution to another. Finally, solutions to test problems in one, two, and three dimensions in both Cartesian and spherical coordinates are presented.
Wang, Tianbo; Zhou, Wuneng; Zhao, Shouwei; Yu, Weiqin
2014-03-01
In this paper, the robust exponential synchronization problem for a class of uncertain delayed master-slave dynamical systems is investigated by using the adaptive control method. Different from some existing master-slave models, the considered master-slave system includes bounded unmodeled dynamics. In order to compensate for the effect of the unmodeled dynamics and effectively achieve synchronization, a novel adaptive controller with simple update laws is proposed. Moreover, the results are given in terms of LMIs, which can be easily solved by the LMI Toolbox in Matlab. A numerical example is given to illustrate the effectiveness of the method.
NASA Astrophysics Data System (ADS)
Cox, Christopher; Liang, Chunlei; Plesniak, Michael W.
2016-06-01
We report development of a high-order compact flux reconstruction method for solving unsteady incompressible flow on unstructured grids with implicit dual time stepping. The method falls under the class of methods now referred to as flux reconstruction/correction procedure via reconstruction. The governing equations employ Chorin's classic artificial compressibility formulation with dual time stepping to solve unsteady flow problems. An implicit non-linear lower-upper symmetric Gauss-Seidel scheme with backward Euler discretization is used to efficiently march the solution in pseudo time, while a second-order backward Euler discretization is used to march in physical time. We verify and validate implementation of the high-order method coupled with our implicit time stepping scheme using both steady and unsteady incompressible flow problems. The current implicit time stepping scheme is proven effective in satisfying the divergence-free constraint on the velocity field in the artificial compressibility formulation within the context of the high-order flux reconstruction method. This compact high-order method is very suitable for parallel computing and can easily be extended to moving and deforming grids.
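The dual time stepping structure described here can be illustrated on a scalar model equation. The explicit pseudo-time relaxation below stands in for the implicit LU-SGS solver of the paper, and all step sizes and the model equation are illustrative assumptions.

```python
import math

def bdf2_residual(u, u_n, u_nm1, dt, f):
    # Unsteady residual: second-order backward differencing in physical time.
    return (3.0 * u - 4.0 * u_n + u_nm1) / (2.0 * dt) - f(u)

def dual_time_step(u_n, u_nm1, dt, f, dtau=0.05, iters=200):
    # March in pseudo-time until the physical-time residual is driven to
    # zero; in the incompressible solver this inner loop is also what
    # enforces the divergence-free constraint at each physical step.
    u = u_n
    for _ in range(iters):
        u -= dtau * bdf2_residual(u, u_n, u_nm1, dt, f)
    return u

f = lambda u: -u                         # model equation du/dt = -u
dt = 0.1
u_nm1, u_n, t = 1.0, math.exp(-dt), dt   # seed BDF2 with exact values
for _ in range(9):
    u_n, u_nm1 = dual_time_step(u_n, u_nm1, dt, f), u_n
    t += dt
print(abs(u_n - math.exp(-t)))           # small BDF2 truncation error
```

Once the inner loop converges, the pseudo-time term vanishes and the update satisfies the BDF2 relation exactly, which is the point of the dual-time construction.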
NASA Astrophysics Data System (ADS)
Cox, Christopher; Liang, Chunlei; Plesniak, Michael
2015-11-01
This paper reports development of a high-order compact method for solving unsteady incompressible flow on unstructured grids with implicit time stepping. The method falls under the class of methods now referred to as flux reconstruction/correction procedure via reconstruction. The governing equations employ the classical artificial compressibility treatment, where dual time stepping is needed to solve unsteady flow problems. An implicit non-linear lower-upper symmetric Gauss-Seidel scheme with backward Euler discretization is used to efficiently march the solution in pseudo time, while a second-order backward Euler discretization is used to march in physical time. We verify and validate implementation of the high-order method coupled with our implicit time-stepping scheme. Three-dimensional results computed on many processing elements will be presented. The high-order method is very suitable for parallel computing and can easily be extended to moving and deforming grids. The current implicit time stepping scheme is proven effective in satisfying the divergence-free constraint on the velocity field in the artificial compressibility formulation within the context of the high-order flux reconstruction method. Financial support provided under the GW Presidential Merit Fellowship.
NASA Astrophysics Data System (ADS)
Murthi, A.; Menon, S.; Sednev, I.
2011-12-01
An inherent difficulty in the ability of global climate models to accurately simulate precipitation lies in the use of a large time step, Δt (usually 30 minutes), to solve the governing equations. Since microphysical processes are characterized by small time scales compared to Δt, finite difference approximations used to advance the microphysics equations suffer from numerical instability and large time truncation errors. With this in mind, the sensitivity of precipitation simulated by the atmospheric component of CESM, namely the Community Atmosphere Model (CAM 5.1), to the microphysics time step (τ) is investigated. Model integrations are carried out for a period of five years with a spin-up time of about six months for a horizontal resolution of 2.5 × 1.9 degrees and 30 levels in the vertical, with Δt = 1800 s. The control simulation with τ = 900 s is compared with one using τ = 300 s for accumulated precipitation and radiation budgets at the surface and top of the atmosphere (TOA), while keeping Δt fixed. Our choice of τ = 300 s is motivated by previous work on warm rain processes wherein it was shown that a value of τ around 300 s was necessary, but not sufficient, to ensure positive definiteness and numerical stability of the explicit time integration scheme used to integrate the microphysical equations. However, since the entire suite of microphysical processes is represented in our case, we suspect that this might impose additional restrictions on τ. The τ = 300 s case produces differences in large-scale accumulated rainfall from the τ = 900 s case by as large as 200 mm over certain regions of the globe. The spatial patterns of total accumulated precipitation using τ = 300 s are in closer agreement with satellite-observed precipitation, when compared to the τ = 900 s case. Differences are also seen in the radiation budget, with the τ = 300 (900) s cases producing surpluses that range between 1-3 W/m2 at both the TOA and surface in the global…
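The substepping at issue can be sketched as follows. The tendency function and the positivity clipping are stand-ins for a real microphysics scheme; the 1800 s host step and the 300/900 s values of τ come from the abstract.

```python
def advance_microphysics(q, dt_host, tau, tendency):
    # Sub-cycle a (stand-in) microphysics tendency with step tau inside
    # one host-model step dt_host, clipping to keep q positive-definite.
    n = max(1, int(round(dt_host / tau)))
    dt_sub = dt_host / n
    for _ in range(n):
        q = max(0.0, q + dt_sub * tendency(q))
    return q

decay = lambda q: -q / 600.0          # fast process with a 600 s time scale
print(advance_microphysics(1.0, 1800.0, 300.0, decay))  # resolved decay
print(advance_microphysics(1.0, 1800.0, 900.0, decay))  # overshoots, clipped
```

With τ = 300 s each substep resolves this 600 s process, while τ = 900 s overshoots through zero and is caught only by the clipping, mirroring the positivity and stability concerns raised in the abstract.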
Torres-González, Arturo; Martinez-de Dios, Jose Ramiro; Ollero, Anibal
2014-04-25
This work is motivated by robot-sensor network cooperation techniques where sensor nodes (beacons) are used as landmarks for range-only (RO) simultaneous localization and mapping (SLAM). This paper presents a RO-SLAM scheme that actuates over the measurement gathering process using mechanisms that dynamically modify the rate and variety of measurements that are integrated in the SLAM filter. It includes a measurement gathering module that can be configured to collect direct robot-beacon and inter-beacon measurements with different inter-beacon depth levels and at different rates. It also includes a supervision module that monitors the SLAM performance and dynamically selects the measurement gathering configuration balancing SLAM accuracy and resource consumption. The proposed scheme has been applied to an extended Kalman filter SLAM with auxiliary particle filters for beacon initialization (PF-EKF SLAM) and validated with experiments performed in the CONET Integrated Testbed. It achieved lower map and robot errors (34% and 14%, respectively) than traditional methods with a lower computational burden (16%) and similar beacon energy consumption.
NASA Astrophysics Data System (ADS)
Murillo, J.; García-Navarro, P.; Brufau, P.; Burguete, J.
2006-01-01
In this work, the explicit first order upwind scheme is presented under a formalism that enables the extension of the methodology to large time steps. The number of cells in the stencil of the numerical scheme is related to the allowable size of the CFL number for numerical stability. It is shown how to increase both at the same time. The basic idea is proposed for a 1D scalar equation and extended to 1D and 2D non-linear systems with source terms. The importance of the kind of grid used is highlighted and the method is outlined for irregular grids. The good quality of the results is illustrated by means of several examples including shallow water flow test cases. The bed slope source terms are involved in the method through an upwind discretization.
Comparing an annual and daily time-step model for predicting field-scale phosphorus loss
Technology Transfer Automated Retrieval System (TEKTRAN)
Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably ranging from simple empirically-based annual time-step models to more complex process-based daily time step models. While better accuracy is often assumed with more...
IMPROVEMENTS TO THE TIME STEPPING ALGORITHM OF RELAP5-3D
Cumberland, R.; Mesina, G.
2009-01-01
The RELAP5-3D time step method is used to perform thermal-hydraulic and neutronic simulations of nuclear reactors and other devices. It discretizes time and space by numerically solving several differential equations. Previously, time step size was controlled by halving or doubling the size of a previous time step. This process caused the code to run slower than it potentially could. In this research project, the RELAP5-3D time step method was modified to allow a new method of changing time steps to improve execution speed and to control error. The new RELAP5-3D time step method being studied involves making the time step proportional to the material Courant limit (MCL), while ensuring that the time step does not increase by more than a factor of two between advancements. As before, if a step fails or mass error is excessive, the time step is cut in half. To examine the performance of the new method, a measure of run time and a measure of error were plotted against a changing MCL proportionality constant (m) in seven test cases. The removal of the upper time step limit produced a small increase in error, but a large decrease in execution time. The best value of m was found to be 0.9. The new algorithm is capable of producing a significant increase in execution speed, with a relatively small increase in mass error. The improvements made are now under consideration for inclusion as a special option in the RELAP5-3D production code.
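The step-size rule described above can be sketched directly. The function and flag names are hypothetical, but the logic (track m·MCL, cap growth at a factor of two, halve on a failed step or excessive mass error) follows the text.

```python
def next_time_step(dt_prev, mcl, m=0.9, step_failed=False, mass_error_excessive=False):
    # On a failed advancement or excessive mass error, cut the step in half;
    # otherwise follow m * MCL, growing by at most a factor of two per step.
    if step_failed or mass_error_excessive:
        return dt_prev / 2.0
    return min(m * mcl, 2.0 * dt_prev)
```

Here m = 0.9 is the default because that is the best value reported in the abstract.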
A Muscle Synergy-Inspired Adaptive Control Scheme for a Hybrid Walking Neuroprosthesis.
Alibeji, Naji A; Kirsch, Nicholas Andrew; Sharma, Nitin
2015-01-01
A hybrid neuroprosthesis that uses an electric motor-based wearable exoskeleton and functional electrical stimulation (FES) has promising potential to restore walking in persons with paraplegia. A hybrid actuation structure introduces effector redundancy, making its automatic control a challenging task because multiple muscles and an additional electric motor need to be coordinated. Inspired by the muscle synergy principle, we designed a low-dimensional controller to control multiple effectors: FES of multiple muscles and electric motors. The resulting control system may be less complex and easier to control. To obtain the muscle synergy-inspired low-dimensional control, a subject-specific gait model was optimized to compute optimal control signals for the multiple effectors. The optimal control signals were then dimensionally reduced by using principal component analysis to extract synergies. Then, an adaptive feedforward controller with an update law for the synergy activation was designed. In addition, feedback control was used to provide stability and robustness to the control design. The adaptive-feedforward and feedback control structure makes the low-dimensional controller more robust to disturbances and variations in the model parameters and may help to compensate for other time-varying phenomena (e.g., muscle fatigue). This is proven by using a Lyapunov stability analysis, which yielded semi-global uniformly ultimately bounded tracking. Computer simulations were performed to test the new controller on a 4-degree-of-freedom gait model.
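The dimensionality-reduction step can be sketched with a standard PCA via SVD. The matrix shapes and the toy data are assumptions, standing in for the optimized control signals of the gait model.

```python
import numpy as np

def extract_synergies(U, k):
    # U: (n_effectors, n_time) optimal control signals. Keep k principal
    # components so that U ~ mean + W @ C: the columns of W are synergies
    # and the rows of C are their time-varying activations.
    mean = U.mean(axis=1, keepdims=True)
    W, s, Vt = np.linalg.svd(U - mean, full_matrices=False)
    return mean, W[:, :k], np.diag(s[:k]) @ Vt[:k]

# Toy signals: 6 effectors driven by 2 underlying activation patterns.
rng = np.random.default_rng(0)
U = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 100))
mean, W2, C2 = extract_synergies(U, 2)
print(np.abs(U - (mean + W2 @ C2)).max())   # near zero: 2 synergies suffice
```

The controller in the abstract then adapts the activations (the rows of C) online rather than every effector command individually, which is what makes the control low-dimensional.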
Automated detection scheme of architectural distortion in mammograms using adaptive Gabor filter
NASA Astrophysics Data System (ADS)
Yoshikawa, Ruriha; Teramoto, Atsushi; Matsubara, Tomoko; Fujita, Hiroshi
2013-03-01
Breast cancer is a serious health concern for all women. Computer-aided detection for mammography has been used for detecting masses and micro-calcifications. However, the automated detection of architectural distortion remains challenging with respect to sensitivity. In this study, we propose a novel automated method for detecting architectural distortion. Our method consists of analysis of the mammary gland structure, detection of the distorted region, and reduction of false positive results. We developed an adaptive Gabor filter for analyzing the mammary gland structure that selects filter parameters depending on the thickness of the gland structure. As post-processing, healthy mammary glands that run from the nipple to the chest wall are eliminated by angle analysis. Moreover, background mammary glands are removed based on the intensity of the output image obtained from the adaptive Gabor filter. The distorted region of the mammary gland is then detected as an initial candidate using a concentration index, followed by binarization and labeling. False positives among the initial candidates are eliminated using 23 types of characteristic features and a support vector machine. In the experiments, we compared the automated detection results with interpretations by a radiologist using 50 cases (200 images) from the Digital Database for Screening Mammography (DDSM). As a result, the true positive rate was 82.72%, and the number of false positives per image was 1.39. These results indicate that the proposed method may be useful for detecting architectural distortion in mammograms.
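For reference, a Gabor kernel whose wavelength tracks an estimated structure thickness can be built as below. The specific parameter couplings (wavelength twice the thickness, envelope width half the wavelength) are illustrative assumptions; the abstract does not give the paper's actual parameter rules:

```python
import numpy as np

def adaptive_gabor_kernel(thickness_px, theta, size=31):
    """Gabor kernel whose wavelength is tied to a local thickness
    estimate (hypothetical parameterization, for illustration only)."""
    lam = 2.0 * thickness_px          # wavelength ~ twice the gland thickness
    sigma = 0.5 * lam                 # envelope width tied to wavelength
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate to orientation theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()               # zero mean: flat regions give no response
```

Filtering the mammogram with a bank of such kernels over several orientations gives the orientation-selective gland-structure response the method analyzes.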
A Muscle Synergy-Inspired Adaptive Control Scheme for a Hybrid Walking Neuroprosthesis
Alibeji, Naji A.; Kirsch, Nicholas Andrew; Sharma, Nitin
2015-01-01
A hybrid neuroprosthesis that uses an electric motor-based wearable exoskeleton and functional electrical stimulation (FES) has promising potential to restore walking in persons with paraplegia. A hybrid actuation structure introduces effector redundancy, making automatic control a challenging task because multiple muscles and an additional electric motor need to be coordinated. Inspired by the muscle synergy principle, we designed a low-dimensional controller to control multiple effectors: FES of multiple muscles and electric motors. The resulting control system may be less complex and easier to control. To obtain the muscle synergy-inspired low-dimensional control, a subject-specific gait model was optimized to compute optimal control signals for the multiple effectors. The optimal control signals were then dimensionally reduced using principal component analysis to extract synergies. Then, an adaptive feedforward controller with an update law for the synergy activation was designed. In addition, feedback control was used to provide stability and robustness to the control design. The adaptive-feedforward and feedback control structure makes the low-dimensional controller more robust to disturbances and variations in the model parameters and may help to compensate for other time-varying phenomena (e.g., muscle fatigue). This is proven using a Lyapunov stability analysis, which yielded semi-global uniformly ultimately bounded tracking. Computer simulations were performed to test the new controller on a 4-degree-of-freedom gait model. PMID:26734606
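The dimensionality-reduction step, extracting synergies from precomputed optimal control signals with PCA, can be sketched as follows. This is a minimal SVD-based illustration; the matrix shapes and names are assumptions, not the paper's implementation:

```python
import numpy as np

def extract_synergies(U, k):
    """Reduce a matrix of optimal control signals U (time x effectors)
    to k synergies via PCA (SVD of the centered data)."""
    Uc = U - U.mean(axis=0)                    # center each effector channel
    _, _, Vt = np.linalg.svd(Uc, full_matrices=False)
    W = Vt[:k].T                               # effectors x k synergy vectors
    A = Uc @ W                                 # time x k synergy activations
    return W, A
```

The low-dimensional controller then only needs to adapt the k activation signals A, and the fixed synergy matrix W maps them back to commands for all effectors (Uc is approximated by A @ W.T).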
NASA Astrophysics Data System (ADS)
Xie, Hua; Bosshard, John C.; Hill, Jason E.; Wright, Steven M.; Mitra, Sunanda
2016-03-01
Magnetic resonance imaging (MRI) offers noninvasive, high-resolution, high-contrast cross-sectional anatomic images through the body. Data in conventional MRI are collected in the spatial frequency (Fourier) domain, also known as k-space. Because there is still a great need to improve the temporal resolution of MRI, compressed sensing (CS) in MR imaging has been proposed to exploit the sparsity of MR images, showing great potential to reduce scan time significantly; however, it poses its own unique problems. This paper revisits wavelet-encoded MR imaging, which replaces phase encoding in conventional MRI data acquisition with wavelet encoding by applying wavelet-shaped spatially selective radiofrequency (RF) excitation, and keeps the readout direction as frequency encoding. The practicality of wavelet-encoded MRI by itself is limited due to SNR penalties and poor time resolution compared to conventional Fourier-based MRI. To compensate for these disadvantages, this paper first introduces an undersampling scheme named the significance map for sparse wavelet-encoded k-space to speed up data acquisition as well as to allow for various adaptive imaging strategies. The proposed adaptive wavelet-encoded undersampling scheme does not require prior knowledge of the subject to be scanned. Multiband (MB) parallel imaging is also incorporated with wavelet-encoded MRI by exciting multiple regions simultaneously for further reduction in scan time, desirable for medical applications. Simulation and experimental results are presented showing the feasibility of the proposed approach in further reducing the redundancy of the wavelet k-space data while maintaining relatively high quality.
NASA Astrophysics Data System (ADS)
He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Chen, Ying
2014-05-01
A multimodal biometric system has been considered a promising technique to overcome the defects of unimodal biometric systems. We have introduced a fusion scheme to gain a better understanding and fusion method for a face-iris-fingerprint multimodal biometric system. In our case, we use particle swarm optimization to train a set of adaptive Gabor filters in order to achieve the proper Gabor basis functions for each modality. For a closer analysis of texture information, two different local Gabor features for each modality are produced from the corresponding Gabor coefficients. Next, all matching scores of the two Gabor features for each modality are projected to a single scalar score via a trained support vector regression model for a final decision. A large-scale dataset is formed to validate the proposed scheme using the Facial Recognition Technology database-fafb and CASIA-V3-Interval together with FVC2004-DB2a datasets. The experimental results demonstrate that, as well as achieving further powerful local Gabor features of multimodalities and obtaining better recognition performance by their fusion strategy, our architecture also outperforms some state-of-the-art individual methods and other fusion approaches for face-iris-fingerprint multimodal biometric systems.
NASA Astrophysics Data System (ADS)
El-Shafai, Walid
2015-03-01
3D video transmission over erroneous networks is still a considerable issue due to restricted resources and the presence of severe channel errors. Efficiently compressing 3D video at a low transmission rate, while maintaining a high quality of received 3D video, is very challenging. Since it is not feasible to re-transmit all the corrupted macro-blocks (MBs) in real-time applications with limited resources, it is mandatory to retrieve the lost MBs at the decoder side using efficient post-processing schemes, such as error concealment (EC). In this paper, we propose an adaptive multi-mode EC (AMMEC) algorithm at the decoder, based on utilizing a pre-processing flexible macro-block ordering error resilience (FMO-ER) technique at the encoder, to efficiently conceal the erroneous MBs of intra- and inter-coded frames of 3D video. Experimental simulation results show that the proposed FMO-ER/AMMEC schemes can significantly improve the objective and subjective 3D video quality.
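As a concrete (and deliberately simple) example of one concealment mode such an algorithm could select, the sketch below replaces each lost macro-block by the average of its correctly received spatial neighbours. The actual AMMEC mode-selection logic is more elaborate and is not reproduced here:

```python
import numpy as np

def conceal_lost_mbs(frame, lost_mask, mb=16):
    """Spatial neighbour-averaging concealment (one illustrative EC mode).

    frame: 2D luma array; lost_mask: boolean grid of macro-blocks,
    True where the MB was corrupted."""
    out = frame.copy().astype(float)
    rows, cols = lost_mask.shape
    for r in range(rows):
        for c in range(cols):
            if not lost_mask[r, c]:
                continue
            patches = []
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and not lost_mask[nr, nc]:
                    patches.append(frame[nr*mb:(nr+1)*mb, nc*mb:(nc+1)*mb])
            if patches:  # average whatever neighbours survived
                out[r*mb:(r+1)*mb, c*mb:(c+1)*mb] = np.mean(patches, axis=0)
    return out
```

FMO-ER helps precisely because it scatters the MBs of a slice across the frame, so a lost packet leaves each corrupted MB surrounded by received neighbours that a spatial mode like this can draw on.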
NASA Astrophysics Data System (ADS)
Moura, R. C.; Silva, A. F. C.; Bigarella, E. D. V.; Fazenda, A. L.; Ortega, M. A.
2016-08-01
This paper proposes two important improvements to shock-capturing strategies using a discontinuous Galerkin scheme, namely, accurate shock identification via finite-time Lyapunov exponent (FTLE) operators and efficient shock treatment through a point-implicit discretization of a PDE-based artificial viscosity technique. The advocated approach is based on the FTLE operator, originally developed in the context of dynamical systems theory to identify certain types of coherent structures in a flow. We propose the application of FTLEs in the detection of shock waves and demonstrate the operator's ability to identify strong and weak shocks equally well. The detection algorithm is coupled with a mesh refinement procedure and applied to transonic and supersonic flows. While the proposed strategy can be used potentially with any numerical method, a high-order discontinuous Galerkin solver is used in this study. In this context, two artificial viscosity approaches are employed to regularize the solution near shocks: an element-wise constant viscosity technique and a PDE-based smooth viscosity model. As the latter approach is more sophisticated and preferable for complex problems, a point-implicit discretization in time is proposed to reduce the extra stiffness introduced by the PDE-based technique, making it more competitive in terms of computational cost.
Electronic Structure Calculations and Adaptation Scheme in Multi-core Computing Environments
Seshagiri, Lakshminarasimhan; Sosonkina, Masha; Zhang, Zhao
2009-05-20
Multi-core processing environments have become the norm in generic computing and are being considered for adding an extra dimension to the execution of any application. The T2 Niagara processor is a unique environment consisting of eight cores, each capable of running eight threads simultaneously. Applications like the General Atomic and Molecular Electronic Structure System (GAMESS), used for ab initio molecular quantum chemistry calculations, can be good indicators of the performance of such machines and can serve as a guideline for both hardware designers and application programmers. In this paper we benchmark GAMESS performance on a T2 Niagara processor for a couple of molecules. We also show the suitability of using a middleware-based adaptation algorithm with GAMESS in such a multi-core environment.
Omelyan, Igor (omelyan@icmp.lviv.ua); Kovalenko, Andriy
2013-12-28
We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics
NASA Technical Reports Server (NTRS)
Yan, T.-Y.; Li, V. O. K.
1984-01-01
This paper describes an Adaptive Mobile Access Protocol (AMAP) for the message service of MSAT-X, a proposed experimental mobile satellite communication network. Message lengths generated by the mobiles are assumed to be uniformly distributed. The mobiles are dispersed over a wide geographical area and the channel data rate is limited. AMAP is a reservation-based multiple access scheme. The available bandwidth is divided into subchannels, which are in turn divided into reservation and message channels. The ALOHA multiple access scheme is employed in the reservation channels, while the message channels are demand-assigned. AMAP adaptively reallocates the reservation and message channels to minimize the total average message delay.
NASA Astrophysics Data System (ADS)
Masmoudi, Atef; Zouari, Sonia; Ghribi, Abdelaziz
2015-11-01
We propose a new adaptive block-wise lossless image compression algorithm, which is based on the so-called alphabet reduction scheme combined with adaptive arithmetic coding (AC). This new encoding algorithm is particularly efficient for lossless compression of images with sparse and locally sparse histograms. AC is a very efficient technique for lossless data compression and produces a rate that is close to the entropy; however, a compression performance loss occurs when encoding images or blocks with a limited number of active symbols compared with the number of symbols in the nominal alphabet, which amplifies the zero-frequency problem. Generally, most methods add one to the frequency count of each symbol from the nominal alphabet, which distorts the statistical model and therefore reduces the efficiency of the AC. The aim of this work is to overcome this drawback by assigning to each image block the smallest possible set including all the existing symbols, called active symbols. This is an alternative to using the nominal alphabet when applying conventional arithmetic encoders. We show experimentally that the proposed method outperforms several lossless image compression encoders and standards, including conventional arithmetic encoders, JPEG2000, and JPEG-LS.
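The core idea, restricting the coder's statistical model to the symbols that actually occur in a block, can be illustrated in a few lines (a sketch; the function names are assumptions):

```python
def block_alphabet(block):
    """Active-symbol alphabet for one image block: the smallest symbol
    set containing every value that actually occurs in the block."""
    return sorted(set(block))

def initial_counts(alphabet):
    """Add-one frequency initialization restricted to the ACTIVE symbols,
    so the zero-frequency correction no longer dilutes the model with
    the many nominal symbols that never occur in this block."""
    return {s: 1 for s in alphabet}
```

For an 8-bit image block containing only a handful of distinct gray levels, the adaptive coder's model starts with a few symbols instead of 256, which is where the rate gain on sparse-histogram images comes from.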
The constant displacement scheme for tracking particles in heterogeneous aquifers
Wen, X.H.; Gomez-Hernandez, J.J.
1996-01-01
Simulation of mass transport by particle tracking or random walk in highly heterogeneous media may be computationally inefficient if the traditional constant time step scheme is used. A new scheme, which automatically adjusts the time step for each particle according to the local pore velocity so that each particle always travels a constant distance, is shown to be computationally faster for the same degree of accuracy than the constant time step method. Using the constant displacement scheme, transport calculations in a 2-D aquifer model with a natural-log transmissivity variance of 4 can be 8.6 times faster than using the constant time step scheme.
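The constant displacement scheme amounts to choosing, at every step and for every particle, dt = ds/|v|, so the distance travelled per step is fixed regardless of the local velocity. A minimal advection-only sketch (the random-walk dispersion term is omitted, and the velocity field is a user-supplied function):

```python
import numpy as np

def advect_constant_displacement(pos, velocity_at, ds, n_steps):
    """Advect one particle with the constant displacement scheme:
    each step covers a fixed distance ds, so the time step adapts
    to the local pore velocity."""
    pos = np.asarray(pos, dtype=float)
    t = 0.0
    for _ in range(n_steps):
        v = np.asarray(velocity_at(pos), dtype=float)
        dt = ds / np.linalg.norm(v)   # per-particle, per-step time step
        pos = pos + v * dt            # displacement has magnitude exactly ds
        t += dt
    return pos, t
```

Particles in high-velocity zones thus take many small time steps while particles in stagnant zones take large ones, which is why the scheme beats a single global time step in highly heterogeneous fields.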
Inference for Optimal Dynamic Treatment Regimes using an Adaptive m-out-of-n Bootstrap Scheme
Chakraborty, Bibhas; Laber, Eric B.; Zhao, Yingqi
2013-01-01
A dynamic treatment regime consists of a set of decision rules that dictate how to individualize treatment to patients based on available treatment and covariate history. A common method for estimating an optimal dynamic treatment regime from data is Q-learning, which involves nonsmooth operations on the data. This nonsmoothness causes standard asymptotic approaches for inference, like the bootstrap or Taylor series arguments, to break down if applied without correction. Here, we consider the m-out-of-n bootstrap for constructing confidence intervals for the parameters indexing the optimal dynamic regime. We propose an adaptive choice of m and show that it produces asymptotically correct confidence sets under fixed alternatives. Furthermore, the proposed method has the advantage of being conceptually and computationally much simpler than competing methods possessing this same theoretical property. We provide an extensive simulation study to compare the proposed method with currently available inference procedures. The results suggest that the proposed method delivers nominal coverage while being less conservative than alternatives. The proposed methods are implemented in the qLearn R-package and have been made available on the Comprehensive R Archive Network (http://cran.r-project.org/). Analysis of the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study is used as an illustrative example. PMID:23845276
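A plain m-out-of-n percentile bootstrap, the building block on which the paper's adaptive choice of m rests, can be sketched as follows (illustrative only; the data-adaptive selection of m and the Q-learning estimator are not reproduced here):

```python
import numpy as np

def m_out_of_n_ci(data, stat, m, n_boot=2000, alpha=0.05, seed=0):
    """Percentile confidence interval from an m-out-of-n bootstrap:
    resample m < n points with replacement instead of n, which restores
    valid inference for certain nonsmooth estimators."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = len(data)
    stats = np.array([stat(data[rng.integers(0, n, size=m)])
                      for _ in range(n_boot)])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

The paper's contribution is choosing m from the data so that m = n is recovered in smooth (regular) settings while m grows more slowly than n near the nonsmooth points of the Q-learning estimator.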
Region of interest based robust watermarking scheme for adaptation in small displays
NASA Astrophysics Data System (ADS)
Vivekanandhan, Sapthagirivasan; K. B., Kishore Mohan; Vemula, Krishna Manohar
2010-02-01
Nowadays multimedia data can be easily replicated, and copyright is not legally protected. Cryptography does not allow the use of digital data in its original form, and once the data is decrypted, it is no longer protected. Here we propose a new doubly protected digital image watermarking algorithm, which embeds the watermark image blocks into the adjacent regions of the host image itself based on their block similarity coefficient. The scheme is robust to various noise effects such as Poisson noise, Gaussian noise, and random noise, and thereby provides double security against noise and hackers. As instrumentation applications require highly accurate data, the watermark image to be extracted back from the watermarked image must be immune to various noise effects. Our results provide a better extracted image compared to existing techniques, and in addition we have resized the watermarked image for various displays. Adaptive resizing for displays of various sizes is experimented with, wherein we crop the required information in a frame and zoom it for a large display or resize it for a small display using a threshold value; in either case the background is given little importance, and only the foreground object gains importance, which will be helpful in performing surgeries.
Multirate time-stepping least squares shadowing method for unsteady turbulent flow
NASA Astrophysics Data System (ADS)
Bae, Hyunji Jane; Moin, Parviz
2014-11-01
The recently developed least squares shadowing (LSS) method reformulates unsteady turbulent flow simulations as well-conditioned time domain boundary value problems. The reformulation can enable scalable parallel-in-time simulation of turbulent flows (Wang et al. Phys. Fluids [2013]). An LSS method with multirate time-stepping was implemented to avoid the necessity of taking small global time steps (restricted by the largest value of the Courant number on the grid), resulting in a more efficient algorithm. We will present the results of the multirate time-stepping LSS compared to a single-rate time-stepping LSS and discuss the computational savings. Hyunji Jane Bae acknowledges support from the Stanford Graduate Fellowship.
NASA Technical Reports Server (NTRS)
Wood, William A., III
2002-01-01
A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two-dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization, which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-order, dual mesh upwind flux difference splitting scheme (DMFDSFV), and is shown to produce more accurate results using fewer computer resources for a wide range of test cases. A Blasius flat plate viscous validation case reveals a more accurate v-velocity profile for fluctuation splitting, and reduced artificial dissipation production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid-converged skin friction coefficients with only five points in the boundary layer for this case. The second half of the report develops a local, compact, anisotropic unstructured mesh adaptation scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. The adaptation strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization.
ERIC Educational Resources Information Center
Kamitsuka, Arthur Jun
This study concentrated on developing a conceptual scheme for adapting participation training, an adult education approach based on democratic concepts and practices, to the Three Love Movement (Love of God, Love of Soil, Love of Man) in Japan. (This Movement is an outgrowth of Protestant folk schools.) While democratization is an aim, the…
Halleroed, Tomas; Rylander, Thomas
2008-04-20
A stable hybridization of the finite-element method (FEM) and the finite-difference time-domain (FDTD) scheme for Maxwell's equations with electric and magnetic losses is presented for two-dimensional problems. The hybrid method combines the flexibility of the FEM with the efficiency of the FDTD scheme and it is based directly on Ampere's and Faraday's law. The electric and magnetic losses can be treated implicitly by the FEM on an unstructured mesh, which allows for local mesh refinement in order to resolve rapid variations in the material parameters and/or the electromagnetic field. It is also feasible to handle larger homogeneous regions with losses by the explicit FDTD scheme connected to an implicitly time-stepped and lossy FEM region. The hybrid method shows second-order convergence for smooth scatterers. The bistatic radar cross section (RCS) for a circular metal cylinder with a lossy coating converges to the analytical solution and an accuracy of 2% is achieved for about 20 points per wavelength. The monostatic RCS for an airfoil that features sharp corners yields a lower order of convergence and it is found to agree well with what can be expected for singular fields at the sharp corners. A careful convergence study with resolutions from 20 to 140 points per wavelength provides accurate extrapolated results for this non-trivial test case, which makes it possible to use as a reference problem for scattering codes that model both electric and magnetic losses.
2015-01-01
When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts. PMID:24555448
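One widely used member of this splitting family (a BAOAB-like scheme, related to the velocity Verlet discretization mentioned above) can be written as follows. This is a generic sketch for a single one-dimensional degree of freedom and does not include the paper's time step rescaling:

```python
import numpy as np

def baoab_step(x, v, force, dt, mass, gamma, kT, rng):
    """One step of a BAOAB-like Langevin splitting:
    B = half momentum kick, A = half position drift,
    O = exact Ornstein-Uhlenbeck update of the velocity."""
    v += 0.5 * dt * force(x) / mass             # B: half kick
    x += 0.5 * dt * v                           # A: half drift
    c = np.exp(-gamma * dt)                     # O: exact OU solve
    v = c * v + np.sqrt((1.0 - c**2) * kT / mass) * rng.standard_normal()
    x += 0.5 * dt * v                           # A: half drift
    v += 0.5 * dt * force(x) / mass             # B: half kick
    return x, v
```

Run on a harmonic potential, this splitting samples the configurational distribution with small time-step bias, which is one of the desiderata the abstract refers to.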
Han, Hao; Li, Lihong; Han, Fangfang; Song, Bowen; Moore, William; Liang, Zhengrong
2014-01-01
Computer-aided detection (CADe) of pulmonary nodules is critical to assisting radiologists in the early identification of lung cancer from computed tomography (CT) scans. This paper proposes a novel CADe system based on a hierarchical vector quantization (VQ) scheme. Compared with the commonly used simple thresholding approach, high-level VQ yields a more accurate segmentation of the lungs from the chest volume. In identifying initial nodule candidates (INCs) within the lungs, low-level VQ proves to be effective for INC detection and segmentation, as well as computationally efficient compared to existing approaches. False-positive (FP) reduction is conducted via rule-based filtering operations in combination with a feature-based support vector machine classifier. The proposed system was validated on 205 patient cases from the publicly available online LIDC (Lung Image Database Consortium) database, with each case having at least one juxta-pleural nodule annotation. Experimental results demonstrated that our CADe system obtained an overall sensitivity of 82.7% at a specificity of 4 FPs/scan, and 89.2% sensitivity at 4.14 FPs/scan for the classification of juxta-pleural INCs only. The proposed system outperforms comparable CADe systems and demonstrates its potential for fast and adaptive detection of pulmonary nodules via CT imaging. PMID:25486657
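The elementary VQ operation underlying both the high-level (lung segmentation) and low-level (nodule candidate) stages is nearest-codeword assignment, sketched here for scalar intensities (the paper's hierarchical codebook design is not reproduced):

```python
import numpy as np

def vector_quantize(values, codebook):
    """Map each intensity value to its nearest codeword.

    Returns the codeword indices (a label image, when `values` are
    voxel intensities) and the quantized values."""
    values = np.asarray(values, dtype=float)
    codebook = np.asarray(codebook, dtype=float)
    idx = np.argmin(np.abs(values[:, None] - codebook[None, :]), axis=1)
    return idx, codebook[idx]
```

In a hierarchical scheme, a coarse codebook first separates lung from non-lung tissue, and a finer codebook is then applied inside the lung mask to isolate candidate nodule voxels.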
Enabling fast, stable and accurate peridynamic computations using multi-time-step integration
Lindsay, P.; Parks, M. L.; Prakash, A.
2016-04-13
Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.
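The control flow of multi-time-step integration, one large step for the cheap subdomain and several small substeps for the critical one, can be sketched as below. Only the stepping skeleton is shown; the peridynamic coupling across overlapping subdomains is omitted, and the callback interface is an assumption:

```python
def multi_time_step(advance_coarse, advance_fine, t_end, dt_coarse, ratio):
    """Skeleton of multi-time-step integration: the subdomain of
    interest is subcycled `ratio` times with dt_coarse/ratio for
    every large step of the rest of the domain."""
    t = 0.0
    dt_fine = dt_coarse / ratio
    while t < t_end - 1e-12:
        advance_coarse(t, dt_coarse)             # one large step, cheap region
        for i in range(ratio):                   # subcycle the critical region
            advance_fine(t + i * dt_fine, dt_fine)
        t += dt_coarse
    return t
```

The payoff is that the expensive small time step is paid only where accuracy demands it (e.g., near a crack tip), while the bulk of the domain advances at the large step.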
Boosting the accuracy and speed of quantum Monte Carlo: Size consistency and time step
NASA Astrophysics Data System (ADS)
Zen, Andrea; Sorella, Sandro; Gillan, Michael J.; Michaelides, Angelos; Alfè, Dario
2016-06-01
Diffusion Monte Carlo (DMC) simulations for fermions are becoming the standard for providing high-quality reference data in systems that are too large to be investigated via quantum chemical approaches. DMC with the fixed-node approximation relies on modifications of the Green's function to avoid singularities near the nodal surface of the trial wave function. Here we show that these modifications affect the DMC energies in a way that is not size consistent, resulting in large time-step errors. Building on the modifications of Umrigar et al. and DePasquale et al., we propose a simple Green's function modification that restores size consistency up to large values of the time step, which substantially reduces time-step errors. This algorithm also yields remarkable speedups of up to two orders of magnitude in the calculation of molecule-molecule binding energies and crystal cohesive energies, thus extending the horizons of what is possible with DMC.
NASA Astrophysics Data System (ADS)
Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Zheng, Bin
2017-01-01
The purpose of this study is to evaluate a new method to improve the performance of computer-aided detection (CAD) schemes for screening mammograms, using two approaches. In the first approach, we developed a new case-based CAD scheme using a set of optimally selected global mammographic density, texture, spiculation, and structural similarity features computed from all four full-field digital mammography images of the craniocaudal (CC) and mediolateral oblique (MLO) views, using a modified fast and accurate sequential floating forward selection feature-selection algorithm. Selected features were then applied to a ‘scoring fusion’ artificial neural network classification scheme to produce a final case-based risk score. In the second approach, we combined the case-based risk score with the lesion-based scores of a conventional lesion-based CAD scheme using a new adaptive cueing method. We evaluated our methods using a ten-fold cross-validation scheme on 924 cases (476 cancer and 448 recalled or negative), whereby each case had all four images from the CC and MLO views. The area under the receiver operating characteristic curve was AUC = 0.793 ± 0.015, and the odds ratio monotonically increased from 1 to 37.21 as CAD-generated case-based detection scores increased. Using the new adaptive cueing method, the region-based and case-based sensitivities of the conventional CAD scheme at a false positive rate of 0.71 per image increased by 2.4% and 0.8%, respectively. The study demonstrated that supplementary information can be derived by computing global mammographic density image features to improve CAD-cueing performance on suspicious mammographic lesions.
A GPU-accelerated adaptive discontinuous Galerkin method for level set equation
NASA Astrophysics Data System (ADS)
Karakus, A.; Warburton, T.; Aksel, M. H.; Sert, C.
2016-01-01
This paper presents a GPU-accelerated nodal discontinuous Galerkin method for the solution of the two- and three-dimensional level set (LS) equation on unstructured adaptive meshes. Using adaptive mesh refinement, computations are localised mostly near the interface location to reduce the computational cost. The small global time step size resulting from the local adaptivity is avoided by local time stepping based on a multi-rate Adams-Bashforth scheme. Platform independence of the solver is achieved with an extensible multi-threading programming API that allows runtime selection of different computing devices (GPU and CPU) and different threading interfaces (CUDA, OpenCL and OpenMP). Overall, a highly scalable, accurate and mass-conservative numerical scheme that preserves the simplicity of the LS formulation is obtained. The efficiency, performance and local high-order accuracy of the method are demonstrated through distinct numerical test cases.
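The local time-stepping idea above can be sketched in miniature. The following is an illustrative two-rate integrator using forward Euler, with the slow variable frozen during fast substeps; the paper itself uses a multi-rate Adams-Bashforth scheme on adaptive meshes, so the function names, the model problem, and the coupling treatment here are all assumptions for illustration:

```python
# Illustrative two-rate ("local") time stepping: fast unknowns take m
# substeps of size H/m for every single macro step H taken by the slow
# unknowns. Forward Euler is used for clarity only.

def two_rate_euler(f_slow, f_fast, y_slow, y_fast, H, m, n_steps):
    """Advance a coupled slow/fast system with two time-step sizes.

    The slow variable is frozen while the fast variable substeps,
    a common simplification in multi-rate schemes.
    """
    h = H / m
    for _ in range(n_steps):
        slow_frozen = y_slow
        for _ in range(m):                     # m fast substeps per macro step
            y_fast += h * f_fast(slow_frozen, y_fast)
        y_slow += H * f_slow(y_slow, y_fast)   # one slow step of size H
    return y_slow, y_fast

# Example: y_fast' = -50*y_fast (stiff/fast), y_slow' = -y_slow + y_fast
ys, yf = two_rate_euler(
    f_slow=lambda s, f: -s + f,
    f_fast=lambda s, f: -50.0 * f,
    y_slow=1.0, y_fast=1.0, H=0.01, m=10, n_steps=100)
```

A single-rate explicit scheme would need the small step everywhere for stability; here only the fast unknowns pay that cost, which is the source of the speedup on locally refined meshes.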
Error correction in short time steps during the application of quantum gates
Castro, L. A. de; Napolitano, R. D. J.
2016-04-15
We propose a modification of the standard quantum error-correction method to enable the correction of errors that occur due to interaction with a noisy environment during quantum gates, without modifying the encoding used for memory qubits. Using a perturbative treatment of the noise that allows us to separate it from the ideal evolution of the quantum gate, we demonstrate that in certain cases it is necessary to divide the logical operation into short time steps interspersed with correction procedures. A prescription for how these gates can be constructed is provided, as well as a proof that, even in cases where dividing the quantum gate into short time steps is not necessary, this method may be advantageous for reducing the total duration of the computation.
NASA Technical Reports Server (NTRS)
Garrett, Bruce C.; Swaminathan, P. K.; Murthy, C. S.; Redmon, Michael J.
1987-01-01
A variable time step algorithm has been implemented for solving the stochastic equations of motion for gas-surface collisions. It has been tested for a simple model of electronically inelastic collisions with an insulator surface in which the phonon manifold acts as a heat bath and electronic states are localized. In addition to reproducing the accurate nuclear dynamics of the surface atoms, numerical calculations have shown the algorithm to yield accurate ensemble averages of physical observables such as electronic transition probabilities and total energy loss of the gas atom to the surface. This new algorithm offers a gain in efficiency of up to an order of magnitude compared to fixed time step integration.
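The abstract does not spell out the algorithm, so as a generic illustration (not the authors' stochastic gas-surface method) here is the classic step-doubling error controller that underlies many variable time step integrators:

```python
import math

# Step-doubling adaptive step-size control for an ODE y' = f(t, y):
# compare one full RK4 step against two half steps; the discrepancy
# estimates the local error and drives the step-size update.

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_adaptive(f, t0, y0, t_end, h=0.1, tol=1e-8):
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        y_big = rk4_step(f, t, y, h)                 # one full step
        y_half = rk4_step(f, t, y, h / 2)            # two half steps
        y_small = rk4_step(f, t + h / 2, y_half, h / 2)
        err = abs(y_small - y_big)
        if err < tol:
            t, y = t + h, y_small                    # accept the step
        # grow or shrink h from the local error estimate (4th order)
        h *= min(2.0, max(0.2, 0.9 * (tol / (err + 1e-300)) ** 0.2))
    return y

# Decaying exponential: y' = -y, y(0) = 1, exact y(1) = e^{-1}
y1 = integrate_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0)
```

Rejected steps are simply retried with the shrunken step, so accuracy is controlled without wasting small steps on the smooth portions of the trajectory.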
Error and timing analysis of multiple time-step integration methods for molecular dynamics
NASA Astrophysics Data System (ADS)
Han, Guowen; Deng, Yuefan; Glimm, James; Martyna, Glenn
2007-02-01
Molecular dynamics simulations of biomolecules performed using multiple time-step integration methods are hampered by resonance instabilities. We analyze the properties of a simple 1D linear system integrated with the symplectic reference system propagator MTS (r-RESPA) technique following earlier work by others. A closed form expression for the time step dependent Hamiltonian which corresponds to r-RESPA integration of the model is derived. This permits us to present an analytic formula for the dependence of the integration accuracy on the short-range force cutoff range. A detailed analysis of the force decomposition for the standard Ewald summation method is then given, as the Ewald method is a good candidate to achieve high scaling on modern massively parallel machines. We test the new analysis on a realistic system, a protein in water. Under Langevin dynamics with a weak friction coefficient (ζ = 1 ps⁻¹) to maintain temperature control and using the SHAKE algorithm to freeze out high frequency vibrations, we show that the 5 fs resonance barrier present when all degrees of freedom are unconstrained is postponed to ≈12 fs. An iso-error boundary with respect to the short-range cutoff range and multiple time step size agrees well with the analytical results, which are valid due to the dominance of the high frequency modes in determining integrator accuracy. Using r-RESPA to treat the long range interactions results in a 6× increase in efficiency for the decomposition described in the text.
The Semi-implicit Time-stepping Algorithm in MH4D
NASA Astrophysics Data System (ADS)
Vadlamani, Srinath; Shumlak, Uri; Marklin, George; Meier, Eric; Lionello, Roberto
2006-10-01
The Plasma Science and Innovation Center (PSI Center) at the University of Washington is developing MHD codes to accurately model Emerging Concept (EC) devices. Examination of the semi-implicit time stepping algorithm implemented in the tetrahedral mesh MHD simulation code, MH4D, is presented. The time steps for standard explicit methods, which are constrained by the Courant-Friedrichs-Lewy (CFL) condition, are typically small for simulations of EC experiments due to the large Alfven speed. The CFL constraint is more severe with a tetrahedral mesh because of the irregular cell geometry. The semi-implicit algorithm [1] removes the fast waves constraint, thus allowing for larger time steps. We will present the implementation method of this algorithm, and numerical results for test problems in simple geometry. Also, we will present the effectiveness in simulations of complex geometry, similar to the ZaP [2] experiment at the University of Washington. References: [1] Douglas S. Harned and D. D. Schnack, Semi-implicit method for long time scale magnetohydrodynamic computations in three dimensions, JCP, Volume 65, Issue 1, July 1986, Pages 57-70. [2] U. Shumlak, B. A. Nelson, R. P. Golingo, S. L. Jackson, E. A. Crawford, and D. J. Den Hartog, Sheared flow stabilization experiments in the ZaP flow Z-pinch, Phys. Plasmas 10, 1683 (2003).
NASA Astrophysics Data System (ADS)
Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Qian, Wei; Zheng, Bin
2016-03-01
Current commercialized CAD schemes have high false-positive (FP) detection rates and also correlate highly with radiologists in positive lesion detection. Thus, we recently investigated a new approach to improve the efficacy of applying CAD to assist radiologists in reading and interpreting screening mammograms. Namely, we developed a new global-feature-based CAD scheme that can cue a warning sign on cases with a high risk of being positive. In this study, we investigate the possibility of fusing global-feature or case-based scores with the local or lesion-based CAD scores using an adaptive cueing method. We hypothesize that the information from global feature extraction (features extracted from the whole breast regions) is different from, and can provide supplementary information to, the locally extracted features (computed from the segmented lesion regions only). On a large and diverse full-field digital mammography (FFDM) testing dataset with 785 cases (347 negative and 438 cancer cases with masses only), we ran our lesion-based and case-based CAD schemes "as is" on the whole dataset. To assess the supplementary information provided by the global features, we used an adaptive cueing method to adaptively adjust the original CAD-generated detection score (Sorg) of a detected suspicious mass region based on the computed case-based score (Scase) of the case associated with this detected region. Using the adaptive cueing method, better sensitivity results were obtained at lower FP rates (<= 1 FP per image). Namely, increases in sensitivity (in the FROC curves) of up to 6.7% and 8.2% were obtained for the ROI-based and case-based results, respectively.
NASA Astrophysics Data System (ADS)
Rybynok, V. O.; Kyriacou, P. A.
2007-10-01
Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date, despite many attempts, there is no widely accepted and readily available non-invasive technique for measuring blood glucose. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach to enable accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to prove that such a scheme is capable of accurately extracting the concentration of glucose from a complex biological medium.
ERIC Educational Resources Information Center
La Malfa, Giampaolo; Lassi, Stefano; Bertelli, Marco; Albertini, Giorgio; Dosen, Anton
2009-01-01
The importance of emotional aspects in developing cognitive and social abilities has already been underlined by many authors even if there is no unanimous agreement on the factors constituting adaptive abilities, nor is there any on the way to measure them or on the relation between adaptive ability and cognitive level. The purposes of this study…
NASA Astrophysics Data System (ADS)
El Gharamti, Mohamad; Valstar, Johan; Hoteit, Ibrahim
2014-05-01
Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Our results suggest that the proposed scheme allows a reduction of around 80% of the ensemble size as compared to the standard EnKF scheme.
Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Oliva, Doretta; Campodonico, Francesca; Lang, Russell
2012-01-01
The present three single-case studies assessed the effectiveness of technology-based programs to help three persons with multiple disabilities exercise adaptive response schemes independently. The response schemes included (a) left and right head movements for a man who kept his head increasingly static on his wheelchair's headrest (Study I), (b) left- and right-arm movements for a woman who tended to hold both arms/hands tight against her body (Study II), and (c) touching object cues on a computer screen for a girl who rarely used her residual vision for orienting/guiding her hand responses (Study III). The technology involved microswitches/sensors to detect the response schemes and a computer/control system to record their occurrences and activate preferred stimuli contingent on them. Results showed large increases in the response schemes targeted for each of the three participants during the intervention phases of the studies. The importance of using technology-based programs as tools for enabling persons with profound and multiple disabilities to practice relevant responses independently is discussed.
NASA Astrophysics Data System (ADS)
Yu, Chunxue; Yin, Xin'an; Yang, Zhifeng; Cai, Yanpeng; Sun, Tao
2016-09-01
The time step used in the operation of eco-friendly reservoirs has decreased from monthly to daily, and even sub-daily. The shorter time step is considered a better choice for satisfying downstream environmental requirements because it more closely resembles the natural flow regime. However, little consideration has been given to the influence of different time steps on the ability to simultaneously meet human and environmental flow requirements. To analyze this influence, we used an optimization model to explore the relationships among the time step, environmental flow (e-flow) requirements, and human water needs for a wide range of time steps and e-flow scenarios. We used the degree of hydrologic alteration to evaluate the regime's ability to satisfy the e-flow requirements of riverine ecosystems, and used water supply reliability to evaluate the ability to satisfy human needs. We then applied the model to a case study of China's Tanghe Reservoir. We found four efficient time steps (2, 3, 4, and 5 days), with a remarkably high water supply reliability (around 80%) and a low alteration of the flow regime (<35%). Our analysis of the hydrologic alteration revealed the smallest alteration at time steps ranging from 1 to 7 days. However, longer time steps led to higher water supply reliability to meet human needs under several e-flow scenarios. Our results show that adjusting the time step is a simple way to improve reservoir operation performance to balance human and e-flow needs.
NASA Astrophysics Data System (ADS)
Chen, Haizhou; Wang, Jiaxu; Li, Junyang; Tang, Baoping
2017-03-01
This paper presents a new scheme for rolling bearing fault diagnosis using texture features extracted from the time-frequency representations (TFRs) of the signal. To derive the proposed texture features, firstly adaptive optimal kernel time frequency representation (AOK-TFR) is applied to extract TFRs of the signal, which essentially describe the energy distribution characteristics of the signal over the time and frequency domains. Since the AOK-TFR uses a signal-dependent radially Gaussian kernel that adapts over time, it can exactly track minor variations in the signal and provide excellent time-frequency concentration in noisy environments. Simulation experiments are furthermore performed in comparison with common time-frequency analysis methods under different noisy conditions. Secondly, the uniform local binary pattern (uLBP), which is a computationally simple and noise-resistant texture analysis method, is used to calculate histograms from the TFRs to characterize rolling bearing fault information. Finally, the obtained histogram feature vectors are input into the multi-SVM classifier for pattern recognition. We validate the effectiveness of the proposed scheme through several experiments, and comparative results demonstrate that the new fault diagnosis technique performs better than most state-of-the-art techniques; we also find that the proposed algorithm possesses adaptivity and noise-resistance qualities that could be very useful in real industrial applications.
Sensitivity of the High-Resolution WAM Model with Respect to Time Step
NASA Astrophysics Data System (ADS)
Kasemets, K.; Soomere, T.
The northern part of the Baltic Proper and its subbasins (the Bothnian Sea, the Gulf of Finland, Moonsund) serve as a challenge for wave modellers. Unlike the southern and eastern parts of the Baltic Sea, their coasts are highly irregular and contain many peculiarities with a characteristic horizontal scale of the order of a few kilometres. For example, the northern coast of the Gulf of Finland is extremely ragged and contains a huge number of small islands. Its southern coast is more or less regular but has an up to 50 m high cliff that is frequently covered by tall forests. The area also contains numerous banks with water depths of a couple of metres that may essentially modify wave properties near the banks owing to topographical effects. This feature suggests that a high-resolution wave model should be applied for the region in question, with a horizontal resolution of the order of 1 km or even less. According to the Courant-Friedrichs-Lewy criterion, the integration time step for such models must be of the order of a few tens of seconds. A high-resolution WAM model turns out to be fairly sensitive to the particular choice of the time step. In our experiments, a medium-resolution model for the whole Baltic Sea was used, with a horizontal resolution of 3 miles (3' along latitudes and 6' along longitudes) and an angular resolution of 12 directions. The model was run with a steady wind blowing at 20 m/s from different directions and with two time steps (1 and 3 minutes). For most wind directions, the rms difference of significant wave heights calculated with different time steps did not exceed 10 cm and typically was of the order of a few per cent. The difference arose within a few tens of minutes and generally did not increase in further computations. However, in the case of the north wind, the difference increased nearly monotonically and reached 25-35 cm (10-15%) within three hours of integration whereas mean of significant wave
Imaginary Time Step Method to Solve the Dirac Equation with Nonlocal Potential
Zhang Ying; Liang Haozhao; Meng Jie
2009-08-26
The imaginary time step (ITS) method is applied to solve the Dirac equation with nonlocal potentials in coordinate space. Taking the nucleus ¹²C as an example, even with nonlocal potentials, the direct ITS evolution for the Dirac equation still meets the disaster of the Dirac sea. However, following the recipe in our former investigation, the disaster can be avoided by the ITS evolution for the corresponding Schrödinger-like equation without localization, which gives convergent results exactly the same as those obtained iteratively by the shooting method with localized effective potentials.
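The core of imaginary time evolution can be sketched on a small Hermitian matrix Hamiltonian: repeatedly applying a first-order approximation of e^{-dt·H} and renormalizing damps the excited states, converging to the lowest eigenstate. The paper's subtlety is that the Dirac Hamiltonian is unbounded from below (the Dirac sea), so this naive iteration fails there and a Schrödinger-like equation is evolved instead; the matrix, step size, and function below are illustrative assumptions:

```python
import numpy as np

# Imaginary time step (ITS) evolution sketched for a small Hermitian
# matrix: each iteration multiplies by (1 - dt*H), a first-order
# approximation of e^{-dt H}, and renormalizes. Components along higher
# eigenstates shrink fastest, leaving the ground state.

def imaginary_time_ground_state(H, dt=0.01, n_iter=5000):
    psi = np.ones(H.shape[0])                 # any state with ground-state overlap
    for _ in range(n_iter):
        psi = psi - dt * (H @ psi)            # one imaginary time step
        psi /= np.linalg.norm(psi)            # renormalize
    energy = psi @ H @ psi                    # converged energy expectation
    return energy, psi

# Toy Hamiltonian: a coupled 2x2 block plus an isolated level
H = np.diag([0.0, 1.0, 3.0])
H[0, 1] = H[1, 0] = 0.5
E0, psi0 = imaginary_time_ground_state(H)     # lowest eigenvalue 0.5 - sqrt(0.5)
```

For a spectrum unbounded from below, no dt makes this iteration contract onto a ground state, which is exactly the "disaster of the Dirac sea" the abstract refers to.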
NASA Astrophysics Data System (ADS)
Antoine, Xavier; Besse, Christophe; Rispoli, Vittorio
2016-12-01
The aim of this paper is to build and validate some explicit high-order schemes, both in space and time, for simulating the dynamics of systems of nonlinear Schrödinger/Gross-Pitaevskii equations. The method is based on the combination of high-order IMplicit-EXplicit (IMEX) schemes in time and Fourier pseudo-spectral approximations in space. The resulting IMEXSP schemes are highly accurate, efficient and easy to implement. They are also robust when used in conjunction with an adaptive time stepping strategy and appear as an interesting alternative to time-splitting pseudo-spectral (TSSP) schemes. Finally, a complete numerical study is developed to investigate the properties of the IMEXSP schemes, in comparison with TSSP schemes, for one- and two-component systems of Gross-Pitaevskii equations.
Kumar, Ravi; Saxena, Rajiv
2014-01-01
Semiblind channel estimation method provides the best trade-off in terms of bandwidth overhead, computational complexity and latency. The result after using multiple input multiple output (MIMO) systems shows higher data rate and longer transmit range without any requirement for additional bandwidth or transmit power. This paper presents the detailed analysis of diversity coding techniques using MIMO antenna systems. Different space time block codes (STBCs) schemes have been explored and analyzed with the proposed higher code rate. STBCs with higher code rates have been simulated for different modulation schemes using MATLAB environment and the simulated results have been compared in the semiblind environment which shows the improvement even in highly correlated antenna arrays and is found very close to the condition when channel state information (CSI) is known to the channel. PMID:24688379
NASA Astrophysics Data System (ADS)
Hoepfer, Matthias
co-simulation approach to modeling and simulation. It lays out the general approach to dynamic system co-simulation, and gives a comprehensive overview of what co-simulation is and what it is not. It creates a taxonomy of the requirements and limits of co-simulation, and the issues arising with co-simulating sub-models. Possible solutions towards resolving the stated problems are investigated to a certain depth. A particular focus is given to the issue of time stepping. It will be shown that for dynamic models, the selection of the simulation time step is a crucial issue with respect to computational expense, simulation accuracy, and error control. The reasons for this are discussed in depth, and a time stepping algorithm for co-simulation with unknown dynamic sub-models is proposed. Motivations and suggestions for the further treatment of selected issues are presented.
Finite time step and spatial grid effects in δf simulation of warm plasmas
Sturdevant, Benjamin J.; Parker, Scott E.
2016-01-15
This paper introduces a technique for analyzing time integration methods used with the particle weight equations in δf method particle-in-cell (PIC) schemes. The analysis applies to the simulation of warm, uniform, periodic or infinite plasmas in the linear regime and considers the collective behavior similar to the analysis performed by Langdon for full-f PIC schemes [1,2]. We perform both a time integration analysis and spatial grid analysis for a kinetic ion, adiabatic electron model of ion acoustic waves. An implicit time integration scheme is studied in detail for δf simulations using our weight equation analysis and for full-f simulations using the method of Langdon. It is found that the δf method exhibits a CFL-like stability condition for low temperature ions, which is independent of the parameter characterizing the implicitness of the scheme. The accuracy of the real frequency and damping rate due to the discrete time and spatial schemes is also derived using a perturbative method. The theoretical analysis of numerical error presented here may be useful for the verification of simulations and for providing intuition for the design of new implicit time integration schemes for the δf method, as well as understanding differences between δf and full-f approaches to plasma simulation.
Construction of Low Dissipative High Order Well-Balanced Filter Schemes for Non-Equilibrium Flows
NASA Technical Reports Server (NTRS)
Wang, Wei; Yee, H. C.; Sjogreen, Bjorn; Magin, Thierry; Shu, Chi-Wang
2009-01-01
The goal of this paper is to generalize the well-balanced approach for non-equilibrium flow studied by Wang et al. [26] to a class of low dissipative high order shock-capturing filter schemes and to explore further advantages of well-balanced schemes in reacting flows. The class of filter schemes developed by Yee et al. [30], Sjoegreen & Yee [24] and Yee & Sjoegreen [35] consists of two steps: a full time step of a spatially high order non-dissipative base scheme, and an adaptive nonlinear filter containing shock-capturing dissipation. A useful property of the filter scheme is that the base scheme and the filter are stand-alone modules in the design. Therefore, the idea of designing a well-balanced filter scheme is straightforward, i.e., choosing a well-balanced base scheme with a well-balanced filter (both of high order). A typical class of these schemes shown in this paper is the high order central difference/predictor-corrector (PC) schemes with a high order well-balanced WENO filter. The new filter scheme with the well-balanced property combines the features of both filter methods and well-balanced schemes: it can preserve certain steady-state solutions exactly; it is able to capture small perturbations, e.g., turbulence fluctuations; and it adaptively controls numerical dissipation. Thus it shows high accuracy, efficiency and stability in shock/turbulence interactions. Numerical examples containing 1D and 2D smooth problems, a 1D stationary contact discontinuity problem and 1D turbulence/shock interactions are included to verify the improved accuracy, in addition to the well-balanced behavior.
Extended particle-in-cell schemes for physics in ultrastrong laser fields: Review and developments.
Gonoskov, A; Bastrakov, S; Efimenko, E; Ilderton, A; Marklund, M; Meyerov, I; Muraviev, A; Sergeev, A; Surmin, I; Wallin, E
2015-08-01
We review common extensions of particle-in-cell (PIC) schemes which account for strong field phenomena in laser-plasma interactions. After describing the physical processes of interest and their numerical implementation, we provide solutions for several associated methodological and algorithmic problems. We propose a modified event generator that precisely models the entire spectrum of incoherent particle emission without any low-energy cutoff, and which imposes close to the weakest possible demands on the numerical time step. Based on this, we also develop an adaptive event generator that subdivides the time step for locally resolving QED events, allowing for efficient simulation of cascades. Further, we present a unified technical interface for including the processes of interest in different PIC implementations. Two PIC codes which support this interface, PICADOR and ELMIS, are also briefly reviewed.
NASA Astrophysics Data System (ADS)
Farahvash, Shayan; Akhavan, Koorosh; Kavehrad, Mohsen
1999-12-01
This paper presents a solution to the problem of providing bit-error rate performance guarantees in a fixed millimeter-wave wireless system, such as a local multipoint distribution system, in line-of-sight or nearly line-of-sight applications. The basic concept is to take advantage of the slow-fading behavior of the fixed wireless channel by changing the transmission code rate. Rate-compatible punctured convolutional codes are used to implement adaptive coding. Cochannel interference analysis is carried out for the downlink direction, from the base station to the subscriber premises. Cochannel interference is treated as a noise-like random process with a power equal to the sum of the power from a finite number of interfering base stations. Two different cellular architectures, based on using single or dual polarizations, are investigated. The average spectral efficiency of the proposed adaptive-rate system is found to be at least 3 times larger than that of a fixed-rate system with similar outage requirements.
NASA Technical Reports Server (NTRS)
Steger, J. L.; Dougherty, F. C.; Benek, J. A.
1983-01-01
A mesh system composed of multiple overset body-conforming grids is described for adapting finite-difference procedures to complex aircraft configurations. In this so-called 'chimera mesh,' a major grid is generated about a main component of the configuration and overset minor grids are used to resolve all other features. Methods for connecting overset multiple grids and modifications of flow-simulation algorithms are discussed. Computational tests in two dimensions indicate that the use of multiple overset grids can simplify the task of grid generation without an adverse effect on flow-field algorithms and computer code complexity.
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui
2016-01-01
The Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to its multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles, but few quantitative studies evaluating the accuracy of these simulations have been reported. In this paper, a method based on cross-correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting position accuracy and shape accuracy respectively, are first identified. Two quantitative indices, the GVE (group velocity error) and the MACCC (maximum absolute value of the cross-correlation coefficient), derived from cross-correlation analysis between a simulated signal and a reference waveform, are then proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy in position and shape is quantitatively evaluated. To apply the proposed method to the selection of appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed, and proper element sizes for different element types and time steps for different time integration schemes are selected. These results show that the proposed method is feasible and effective, and can serve as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation.
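A hedged sketch of the two indices: the abstract does not give formulas, so the normalization below and the way the GVE is derived from the best-match lag are assumptions, not the paper's exact definitions.

```python
import numpy as np

def maccc(sim, ref):
    """Maximum absolute value of the normalized cross-correlation
    coefficient between a simulated signal and a reference waveform
    (shape-accuracy index; normalization is an assumed Pearson-style one)."""
    sim = (sim - sim.mean()) / (sim.std() * len(sim))
    ref = (ref - ref.mean()) / ref.std()
    return np.max(np.abs(np.correlate(sim, ref, mode="full")))

def group_velocity_error(lag_samples, dt, distance, v_ref):
    """Group-velocity error from the arrival-time lag maximizing the
    cross-correlation (position-accuracy index; assumed form)."""
    v_sim = distance / (lag_samples * dt)
    return (v_sim - v_ref) / v_ref
```

For identical signals the MACCC evaluates to 1 at zero lag; values below 1 indicate waveform-shape distortion in the simulated signal.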
Owolabi, Kolade M; Patidar, Kailash C
2016-01-01
In this paper, we consider numerical simulations of an extended nonlinear form of the Kierstead-Slobodkin reaction-transport system in one and two dimensions. We employ the popular fourth-order exponential time differencing Runge-Kutta (ETDRK4) scheme proposed by Cox and Matthews (J Comput Phys 176:430-455, 2002), as modified by Kassam and Trefethen (SIAM J Sci Comput 26:1214-1233, 2005), for the time integration of spatially discretized partial differential equations. We demonstrate the superiority of ETDRK4 over existing standard exponential time differencing integrators and provide timing and error comparisons. The numerical results obtained in this paper grant further insight into the question 'What is the minimal size of the spatial domain so that the population persists?' posed by Kierstead and Slobodkin (J Mar Res 12:141-147, 1953), with the conclusive remark that the population size increases with the size of the domain. To examine the biological wave phenomena of the solutions, we present numerical results in both one- and two-dimensional space, which have interesting ecological implications. Initial data and parameter values were chosen to mimic some existing patterns.
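For readers unfamiliar with exponential time differencing, its simplest first-order relative (exponential Euler, ETD1) treats the stiff linear part of u' = L·u + N(u) exactly. This is an illustration of the family's core idea, not the fourth-order ETDRK4 scheme the paper uses.

```python
import numpy as np

def etd1_step(u, L, h, nonlin):
    """One first-order exponential time differencing (ETD1) step for the
    scalar ODE u' = L*u + N(u): the linear part L is integrated exactly,
    the nonlinearity N is held constant over the step (sketch only)."""
    eLh = np.exp(L * h)
    return eLh * u + (eLh - 1.0) / L * nonlin(u)
```

When N is constant the step is exact, which is why ETD-type schemes tolerate stiff linear operators that would force tiny steps on explicit integrators.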
Construction of low dissipative high-order well-balanced filter schemes for non-equilibrium flows
Wang Wei; Yee, H.C.; Sjoegreen, Bjoern; Magin, Thierry; Shu, Chi-Wang
2011-05-20
The goal of this paper is to generalize the well-balanced approach for non-equilibrium flow studied by Wang et al. (2009) to a class of low dissipative high-order shock-capturing filter schemes and to explore further advantages of well-balanced schemes in reacting flows. More general 1D and 2D reacting flow models and new examples of shock-turbulence interactions are provided to demonstrate the advantage of well-balanced schemes. The class of filter schemes developed by Yee et al. (1999), Sjoegreen and Yee (2004) and Yee and Sjoegreen (2007) consists of two steps: a full time step of a spatially high-order non-dissipative base scheme, followed by an adaptive non-linear filter containing shock-capturing dissipation. A useful property of the filter scheme is that the base scheme and the filter are stand-alone modules by design. The idea of designing a well-balanced filter scheme is therefore straightforward: choose a well-balanced base scheme together with a well-balanced filter (both with high-order accuracy). A typical class of such schemes shown in this paper is high-order central difference/predictor-corrector (PC) schemes with a high-order well-balanced WENO filter. The new filter scheme with the well-balanced property combines the features of filter methods with those of well-balanced schemes: it can preserve certain steady-state solutions exactly; it is able to capture small perturbations, e.g. turbulence fluctuations; and it adaptively controls numerical dissipation. Thus it shows high accuracy, efficiency and stability in shock/turbulence interactions. Numerical examples containing 1D and 2D smooth problems, a 1D stationary contact discontinuity problem and 1D turbulence/shock interactions are included to verify the improved accuracy, in addition to the well-balanced behavior.
Adaptive Numerical Algorithms in Space Weather Modeling
NASA Technical Reports Server (NTRS)
Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav
2010-01-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical
Adaptive numerical algorithms in space weather modeling
NASA Astrophysics Data System (ADS)
Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav
2012-02-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit
Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme
NASA Technical Reports Server (NTRS)
Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook
1995-01-01
Three different schemes are employed to solve the benchmark problems. The first is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third is an optimized compact finite difference scheme modified by us: 4th-order Runge-Kutta time stepping with a 4th-order pentadiagonal compact spatial discretization having maximum resolution characteristics. The problems of category 1 are solved using the second (UNO3-ACM) and third (optimized compact) schemes. The problems of category 2 are solved using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved using the first (TVD3) scheme. It can be concluded from the present calculations that the optimized compact scheme and the UNO3-ACM show good resolution for category 1 and category 2, respectively.
NASA Astrophysics Data System (ADS)
Chen, Hung-Ming; Chen, Po-Hung; Lin, Cheng-Tso; Liu, Ching-Chung
2012-11-01
An efficient algorithm, named modified directional gradient descent search, is presented to enhance the directional gradient descent search (DGDS) algorithm and reduce computation. A modified search pattern with an adaptive threshold for early termination is applied to DGDS to avoid needless calculation once the search point is good enough. A statistical analysis of the distribution of best motion vectors is used to decide the modified search pattern. A statistical model based on the block distortion information of the previously coded frame then guides the selection of the early termination parameters, allowing a trade-off between video quality and computational complexity. Simulation results show that the proposed algorithm significantly reduces motion estimation (ME) cost, saving 17.81% of the average search points and 20% of ME time compared to the fast DGDS algorithm implemented in the H.264/AVC JM 18.2 reference software across different types of sequences, while maintaining a similar bit rate without loss of picture quality.
Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, Helen C.; Mansour, Nagi (Technical Monitor)
2002-01-01
Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes for the compressible Euler and Navier-Stokes equations has been developed and verified by the authors and collaborators. These schemes are suitable for the problems in question. Basically, the scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme. To control the amount of numerical dissipation, multiresolution wavelets are used as sensors to adaptively limit the amount and to aid the selection and/or blending of the appropriate types of numerical dissipation to be used. Magnetohydrodynamics (MHD) waves play a key role in drag reduction in highly maneuverable high speed combat aircraft, in space weather forecasting, and in the understanding of the dynamics of the evolution of our solar system and the main sequence stars. Although there exist a few well-studied second and third-order high-resolution shock-capturing schemes for the MHD in the literature, these schemes are too diffusive and not practical for turbulence/combustion MHD flows. On the other hand, extension of higher than third-order high-resolution schemes to the MHD system of equations is not straightforward. Unlike the hydrodynamic equations, the inviscid MHD system is non-strictly hyperbolic with non-convex fluxes. The wave structures and shock types are different from their hydrodynamic counterparts. Many of the non-traditional hydrodynamic shocks are not fully understood. Consequently, reliable and highly accurate numerical schemes for multiscale MHD equations pose a great
Space-time adaptive numerical methods for geophysical applications.
Castro, C E; Käser, M; Toro, E F
2009-11-28
In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen locally adaptive such that the solution is evolved explicitly in time by an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves comparing the new approach with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed and a new mesh partition approach is proposed and tested to further reduce computational cost.
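The locally adaptive time-stepping idea described above can be sketched as follows, assuming a simple CFL-type local stability criterion (the paper's exact criterion may differ; `h`, `c` and `cfl` are illustrative symbols):

```python
def local_time_steps(h, c, cfl=0.9):
    """Locally adaptive explicit time steps: each element i advances with
    dt_i = cfl * h_i / c_i, where h_i is its mesh size and c_i its local
    wave speed. Fine-mesh, slow-wave regions are thus not forced onto the
    smallest global step (sketch of the idea, not the paper's scheme)."""
    return [cfl * hi / ci for hi, ci in zip(h, c)]
```

With global time stepping, every element would instead advance with min(dt_i), which is what makes local time stepping attractive for meshes refined in low-wave-speed areas.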
NASA Astrophysics Data System (ADS)
De Basabe, Jonás D.; Sen, Mrinal K.
2010-04-01
We investigate the stability of some high-order finite element methods, namely the spectral element method (SEM) and the interior-penalty discontinuous Galerkin method (IP-DGM), for acoustic or elastic wave propagation, which have become increasingly popular in the recent past. We consider the Lax-Wendroff method (LWM) for time stepping and show that it allows for a larger time step than the classical leap-frog finite difference method, with higher-order accuracy. In particular, the fourth-order LWM allows for a time step 73 per cent larger than that of the leap-frog method; the computational cost is approximately double per time step, but the larger time step partially compensates for this additional cost. Necessary, but not sufficient, stability conditions are given for the mentioned methods for orders up to 10 in space and time. The stability conditions for IP-DGM are approximately 20 and 60 per cent more restrictive than those for SEM in the acoustic and elastic cases, respectively.
On Some Numerical Dissipation Schemes
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Radespiel, R.; Turkel, E.
1998-01-01
Several schemes for introducing an artificial dissipation into a central difference approximation to the Euler and Navier-Stokes equations are considered. The focus of the paper is on the convective upwind and split pressure (CUSP) scheme, which is designed to support single interior point discrete shock waves. This scheme is analyzed and compared in detail with scalar dissipation and matrix dissipation (MATD) schemes. Resolution capability is determined by solving subsonic, transonic, and hypersonic flow problems. A finite-volume discretization and a multistage time-stepping scheme with multigrid are used to compute solutions to the flow equations. Numerical solutions are also compared with either theoretical solutions or experimental data. For transonic airfoil flows the best accuracy on coarse meshes for aerodynamic coefficients is obtained with a simple MATD scheme. The coarse-grid accuracy for the original CUSP scheme is improved by modifying the limiter function used with the scheme, giving comparable accuracy to that obtained with the MATD scheme. The modifications reduce the background dissipation and provide control over the regions where the scheme can become first order.
Multiple-Time Step Ab Initio Molecular Dynamics Based on Two-Electron Integral Screening.
Fatehi, Shervin; Steele, Ryan P
2015-03-10
A multiple-timestep ab initio molecular dynamics scheme based on varying the two-electron integral screening method used in Hartree-Fock or density functional theory calculations is presented. Although screening is motivated by numerical considerations, it is also related to separations in the length- and timescales characterizing forces in a molecular system: Loose thresholds are sufficient to describe fast motions over short distances, while tight thresholds may be employed for larger length scales and longer times, leading to a practical acceleration of ab initio molecular dynamics simulations. Standard screening approaches can lead, however, to significant discontinuities in (and inconsistencies between) the energy and gradient when the screening threshold is loose, making them inappropriate for use in dynamics. To remedy this problem, a consistent window-screening method that smooths these discontinuities is devised. Further algorithmic improvements reuse electronic-structure information within the dynamics step and enhance efficiency relative to a naïve multiple-timestepping protocol. The resulting scheme is shown to realize meaningful reductions in the cost of Hartree-Fock and B3LYP simulations of a moderately large system, the protonated sarcosine/glycine dipeptide embedded in a 19-water cluster.
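Multiple-time-step integration of the kind the paper accelerates can be illustrated with a generic r-RESPA-style velocity-Verlet step. Here `f_fast` and `f_slow` are hypothetical stand-ins for the loosely and tightly screened force contributions; the paper's actual scheme varies integral-screening thresholds rather than splitting classical forces.

```python
def respa_like_step(x, v, f_fast, f_slow, dt_outer, n_inner, mass=1.0):
    """One reversible multiple-time-step (r-RESPA-style) step in 1D:
    the slow force is applied as half-kicks at the outer step, while the
    fast force is integrated with n_inner velocity-Verlet substeps."""
    dt_inner = dt_outer / n_inner
    v += 0.5 * dt_outer * f_slow(x) / mass      # outer half-kick (slow force)
    for _ in range(n_inner):                    # inner loop (fast force)
        v += 0.5 * dt_inner * f_fast(x) / mass
        x += dt_inner * v
        v += 0.5 * dt_inner * f_fast(x) / mass
    v += 0.5 * dt_outer * f_slow(x) / mass      # outer half-kick (slow force)
    return x, v
```

Because the expensive slow force is evaluated only once per outer step, the cost per unit of simulated time drops roughly by the ratio of force costs.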
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
Corrales, Louis R.; Devanathan, Ram
2006-09-01
Non-equilibrium molecular dynamics simulation trajectories must in principle conserve energy along the entire path. Processes exist in high-energy primary knock-on atom cascades that can affect the energy conservation, specifically during the ballistic phase where collisions bring atoms into very close proximities. The solution, in general, is to reduce the time step size of the simulation. This work explores the effects of variable time step algorithms and the effects of specifying a maximum displacement. The period of the ballistic phase can be well characterized by methods developed in this work to monitor the kinetic energy dissipation during a high-energy cascade.
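A common displacement-limit heuristic for such variable-time-step cascade simulations caps the step so the fastest atom moves at most a specified distance per step. The abstract does not give the paper's exact algorithm, so the form below is an assumption.

```python
def cascade_time_step(speeds, dt_max, d_max):
    """Variable time step for the ballistic phase of a collision cascade:
    shrink the step so the fastest atom displaces at most d_max, and never
    exceed the nominal step dt_max (assumed displacement-limit heuristic)."""
    v_peak = max(abs(v) for v in speeds)
    if v_peak == 0.0:
        return dt_max
    return min(dt_max, d_max / v_peak)
```

During the ballistic phase v_peak is huge and the step shrinks automatically; as the cascade thermalizes the step relaxes back toward dt_max, which is how energy conservation is preserved without paying the small-step cost for the whole trajectory.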
An adaptive tau-leaping method for stochastic simulations of reaction-diffusion systems
NASA Astrophysics Data System (ADS)
Padgett, Jill M. A.; Ilie, Silvana
2016-03-01
Stochastic modelling is critical for studying many biochemical processes in a cell, in particular when some reacting species have low population numbers. For many such cellular processes the spatial distribution of the molecular species plays a key role. The evolution of spatially heterogeneous biochemical systems with some species in low amounts is accurately described by the mesoscopic model of the Reaction-Diffusion Master Equation. The Inhomogeneous Stochastic Simulation Algorithm provides an exact strategy to numerically solve this model, but it is computationally very expensive on realistic applications. We propose a novel adaptive time-stepping scheme for the tau-leaping method for approximating the solution of the Reaction-Diffusion Master Equation. This technique combines effective strategies for variable time-stepping with path preservation to reduce the computational cost, while maintaining the desired accuracy. The numerical tests on various examples arising in applications show the improved efficiency achieved by the new adaptive method.
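The tau-leaping idea the abstract builds on can be sketched as follows. This is a generic, well-mixed version with a deliberately crude step-size control; the paper's reaction-diffusion variant and its adaptive control strategy are more sophisticated.

```python
import numpy as np

def tau_leap_step(x, stoich, propensities, eps=0.03, rng=None):
    """One adaptive tau-leaping step: choose tau so the expected relative
    change of each species stays below eps, then fire Poisson-distributed
    reaction counts over [t, t+tau). Sketch only; step-size control here
    is a simple mean-drift bound, not the paper's strategy."""
    if rng is None:
        rng = np.random.default_rng(0)
    a = propensities(x)                 # propensity of each reaction channel
    a0 = a.sum()
    if a0 == 0.0:
        return x, float("inf")          # no reaction can fire
    drift = np.abs(stoich @ a)          # |d<x>/dt| for each species
    active = drift > 0
    if active.any():
        tau = eps * np.min(np.maximum(x[active], 1.0) / drift[active])
    else:
        tau = eps / a0                  # fallback when drifts cancel exactly
    k = rng.poisson(a * tau)            # number of firings per channel
    return x + stoich @ k, tau
```

Each leap replaces many single-reaction events of the exact (SSA-type) algorithm with one Poisson draw per channel, which is where the computational savings come from.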
BIOMAP A Daily Time Step, Mechanistic Model for the Study of Ecosystem Dynamics
NASA Astrophysics Data System (ADS)
Wells, J. R.; Neilson, R. P.; Drapek, R. J.; Pitts, B. S.
2010-12-01
of both climate and ecosystems must be done at coarse grid resolutions; smaller domains require higher resolution for the simulation of natural resource processes at the landscape scale and that of on-the-ground management practices. Via a combined multi-agency and private conservation effort we have implemented a Nested Scale Experiment (NeScE) that ranges from 1/2 degree resolution (global, ca. 50 km) to ca. 8km (North America) and 800 m (conterminous U.S.). Our first DGVM, MC1, has been implemented at all 3 scales. We are just beginning to implement BIOMAP into NeScE, with its unique features, and daily time step, as a counterpoint to MC1. We believe it will be more accurate at all resolutions providing better simulations of vegetation distribution, carbon balance, runoff, fire regimes and drought impacts.
Erni, Daniel; Liebig, Thorsten; Rennings, Andreas; Koster, Norbert H L; Fröhlich, Jürg
2011-01-01
We propose an adaptive RF antenna system for the excitation (and manipulation) of the fundamental circular waveguide mode (TE(11)) in the context of high-field (7T) traveling-wave magnetic resonance imaging (MRI). The system consists of
NASA Technical Reports Server (NTRS)
Glocer, A.; Toth, G.; Ma, Y.; Gombosi, T.; Zhang, J.-C.; Kistler, L. M.
2009-01-01
The magnetosphere contains a significant amount of ionospheric O+, particularly during geomagnetically active times. The presence of ionospheric plasma in the magnetosphere has a notable impact on magnetospheric composition and processes. We present a new multifluid MHD version of the Block-Adaptive-Tree Solar wind Roe-type Upwind Scheme model of the magnetosphere to track the fate and consequences of ionospheric outflow. The multifluid MHD equations are presented as are the novel techniques for overcoming the formidable challenges associated with solving them. Our new model is then applied to the May 4, 1998 and March 31, 2001 geomagnetic storms. The results are juxtaposed with traditional single-fluid MHD and multispecies MHD simulations from a previous study, thereby allowing us to assess the benefits of using a more complex model with additional physics. We find that our multifluid MHD model (with outflow) gives comparable results to the multispecies MHD model (with outflow), including a more strongly negative Dst, reduced CPCP, and a drastically improved magnetic field at geosynchronous orbit, as compared to single-fluid MHD with no outflow. Significant differences in composition and magnetic field are found between the multispecies and multifluid approach further away from the Earth. We further demonstrate the ability to explore pressure and bulk velocity differences between H+ and O+, which is not possible when utilizing the other techniques considered.
Design of optimally smoothing multi-stage schemes for the Euler equations
NASA Technical Reports Server (NTRS)
Van Leer, Bram; Tai, Chang-Hsien; Powell, Kenneth G.
1989-01-01
In this paper, a method is developed for designing multi-stage schemes that give optimal damping of high-frequencies for a given spatial-differencing operator. The objective of the method is to design schemes that combine well with multi-grid acceleration. The schemes are tested on a nonlinear scalar equation, and compared to Runge-Kutta schemes with the maximum stable time-step. The optimally smoothing schemes perform better than the Runge-Kutta schemes, even on a single grid. The analysis is extended to the Euler equations in one space-dimension by use of 'characteristic time-stepping', which preconditions the equations, removing stiffness due to variations among characteristic speeds. Convergence rates independent of the number of cells in the finest grid are achieved for transonic flow with and without a shock. Characteristic time-stepping is shown to be preferable to local time-stepping, although use of the optimally damping schemes appears to enhance the performance of local time-stepping. The extension of the analysis to the two-dimensional Euler equations is hampered by the lack of a model for characteristic time-stepping in two dimensions. Some results for local time-stepping are presented.
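The multi-stage schemes being optimized have the generic form u_k = u_0 + alpha_k * dt * R(u_{k-1}), where R is the spatial residual. A sketch with illustrative classical 5-stage coefficients (not the paper's optimally smoothing values):

```python
def multistage_step(u, dt, residual,
                    alphas=(0.25, 1/6, 0.375, 0.5, 1.0)):
    """One m-stage step u_k = u_0 + alpha_k*dt*R(u_{k-1}); only the final
    stage coefficient (1.0) sets the accuracy, the earlier alphas shape the
    damping of high-frequency error modes. The alphas shown are common
    illustrative multistage coefficients, not the paper's optimized set."""
    u0 = u
    for a in alphas:
        u = u0 + a * dt * residual(u)
    return u
```

For a linear residual R(u) = lambda*u, the step reduces to multiplying u by an amplification polynomial in lambda*dt; the paper chooses the alphas so that this polynomial strongly damps the high-frequency range targeted by multigrid smoothing.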
NASA Technical Reports Server (NTRS)
Molnar, Melissa; Marek, C. John
2005-01-01
A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed for use in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes. The two-time-step method uses either an initial time-averaged value (step one) or an instantaneous value (step two); the switch is based on a water concentration of 1 x 10(exp -20) moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two-time-step method is used, as opposed to the one-step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first, time-averaged step is used at initial times with smaller water concentrations; it gives the average chemical kinetic time as a function of initial overall fuel-air ratio, initial water-to-fuel mass ratio, temperature, and pressure. The second, instantaneous step, used at higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting rates of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx are obtained for Jet-A fuel and methane with and without water injection, up to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water-to-fuel mass ratio, pressure and temperature (T3). The temperature of the gas entering
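The switching logic of the two-step scheme described above reduces to a threshold test on water concentration. The function name and the stand-in correlation callables below are hypothetical; the actual correlations are the curve fits given in the paper.

```python
def chemical_kinetic_time(conc_h2o, avg_corr, inst_corr, threshold=1e-20):
    """Two-time-step selection: below the water-concentration threshold
    (moles/cc, per the abstract) use the time-averaged correlation (step
    one); at or above it, use the instantaneous correlation (step two).
    avg_corr and inst_corr stand in for the paper's fitted functions."""
    if conc_h2o < threshold:
        return avg_corr()       # step one: initial, time-averaged fit
    return inst_corr()          # step two: instantaneous fit
```

The resulting chemical kinetic time would then be compared against the turbulent mixing time, and the larger of the two taken as the rate-limiting scale.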
Park, Sung-Yun; Cho, Jihyun; Lee, Kyuseok; Yoon, Euisik
2015-12-01
We report a pulse width modulation (PWM) buck converter that is able to achieve a power conversion efficiency (PCE) of > 80% in light loads (100 μA) for implantable biomedical systems. In order to achieve a high PCE for the given light loads, the buck converter adaptively reconfigures the size of the power PMOS and NMOS transistors and their gate drivers in accordance with the load current, while operating at a fixed frequency of 1 MHz. The buck converter employs an analog-digital hybrid control scheme for coarse/fine adjustment of the power transistors. The coarse digital control generates an approximate duty cycle necessary for driving a given load and selects an appropriate width of the power transistors to minimize redundant power dissipation. The fine analog control provides the final tuning of the duty cycle to compensate for the error from the coarse digital control. The mode switching between the analog and digital controls is accomplished by a mode arbiter which estimates the average duty cycle for the given load condition from the limit cycle oscillations (LCO) induced by coarse adjustment. The fabricated buck converter achieved a peak efficiency of 86.3% at 1.4 mA and > 80% efficiency for a wide range of load conditions from 45 μA to 4.1 mA, while generating 1 V output from a 2.5-3.3 V supply. The converter occupies 0.375 mm(2) in a 0.18 μm CMOS process and requires two external components: a 1.2 μF capacitor and a 6.8 μH inductor.
Datta, Dipayan; Gauss, Jürgen
2015-07-07
We report analytical calculations of isotropic hyperfine-coupling constants in radicals using a spin-adapted open-shell coupled-cluster theory, namely, the unitary group based combinatoric open-shell coupled-cluster (COSCC) approach within the singles and doubles approximation. A scheme for the evaluation of the one-particle spin-density matrix required in these calculations is outlined within the spin-free formulation of the COSCC approach. In this scheme, the one-particle spin-density matrix for an open-shell state with spin S and M_S = +S is expressed in terms of the one- and two-particle spin-free (charge) density matrices obtained from the Lagrangian formulation that is used for calculating the analytic first derivatives of the energy. Benchmark calculations are presented for NO, NCO, CH2CN, and two conjugated π-radicals, viz., allyl and 1-pyrrolyl in order to demonstrate the performance of the proposed scheme.
NASA Technical Reports Server (NTRS)
Molnar, Melissa; Marek, C. John
2005-01-01
A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first-order reaction expressions are then used to find the conversion in the reactor. The method uses a two-time-step kinetic scheme. The first, time-averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel-air ratio, temperature, and pressure. The second, instantaneous step is used at higher water concentrations (> 1 x 10^-20 moles/cc) in the mixture and gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure, and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T4) as a function of overall fuel/air ratio, pressure, and initial temperature (T3). High values of the regression coefficient R^2 are obtained.
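The two-step switching logic can be sketched in code. Everything below is a hypothetical placeholder: the correlation forms and coefficients stand in for the actual GLSENS/Excel regression fits, which are not reproduced here; only the threshold-based switch between the time-averaged and instantaneous correlations follows the abstract.

```python
import math

H2O_SWITCH = 1e-20  # moles/cc threshold quoted in the abstract

def tau_averaged(phi, T, p):
    """Step 1: average chemical kinetic time from initial conditions (placeholder fit)."""
    return 1e-6 * math.exp(2000.0 / T) / (phi * p)

def tau_instantaneous(c_fuel, c_h2o, T, p):
    """Step 2: instantaneous kinetic time from current concentrations (placeholder fit)."""
    return 1e-6 * math.exp(1500.0 / T) / (p * (c_fuel + 10.0 * c_h2o))

def chemical_kinetic_time(phi, T, p, c_fuel, c_h2o):
    # Early in the reaction (little water formed) use the time-averaged fit;
    # once water builds past the threshold, switch to the instantaneous fit.
    if c_h2o <= H2O_SWITCH:
        return tau_averaged(phi, T, p)
    return tau_instantaneous(c_fuel, c_h2o, T, p)
```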
NASA Technical Reports Server (NTRS)
Marek, C. John; Molnar, Melissa
2005-01-01
A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first-order reaction expressions are then used to find the conversion in the reactor. The method uses a two-time-step kinetic scheme. The first, time-averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel-air ratio, temperature, and pressure. The second, instantaneous step is used at higher water concentrations (greater than 1 x 10^-20 moles per cc) in the mixture and gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure, and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air fuel and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T4) as a function of overall fuel/air ratio, pressure, and initial temperature (T3). High values of the regression coefficient R^2 are obtained.
Nam, Kwangho
2014-10-14
The development of a multiscale ab initio quantum mechanical and molecular mechanical (AI-QM/MM) method for periodic boundary molecular dynamics (MD) simulations and its acceleration by a multiple time step approach are described. The developed method achieves accuracy and efficiency by integrating the AI-QM/MM level of theory and the previously developed semiempirical (SE) QM/MM-Ewald sum method [J. Chem. Theory Comput. 2005, 1, 2] extended to the smooth particle-mesh Ewald (PME) summation method. In the developed methods, the total energy of the simulated system is evaluated at the SE-QM/MM-PME level of theory to include long-range QM/MM electrostatic interactions, which is then corrected on the fly using the AI-QM/MM level of theory within the real-space cutoff. The resulting energy expression enables decomposition of the total forces applied to each atom into forces determined at the low-level SE-QM/MM method and correction forces at the AI-QM/MM level, so that the system can be integrated using the reversible reference system propagator algorithm. The resulting method achieves a substantial speed-up of the entire calculation by minimizing the number of time-consuming energy and gradient evaluations at the AI-QM/MM level. Test calculations show that the developed multiple time step AI-QM/MM method yields MD trajectories and potential of mean force profiles comparable to single time step QM/MM results. The developed method, together with message passing interface (MPI) parallelization, accelerates the present AI-QM/MM MD simulations about 30-fold relative to the speed of single-core AI-QM/MM simulations for the molecular systems tested in the present work, making the method less than one order of magnitude slower than the SE-QM/MM methods under periodic boundary conditions.
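The force splitting described above, cheap low-level forces integrated with a small inner step and expensive correction forces applied with a larger outer step, can be illustrated on a one-dimensional toy oscillator. The forces below are invented stand-ins, not QM/MM quantities; only the integrator structure follows the standard reversible RESPA pattern.

```python
# Toy 1-D sketch of the multiple-time-step idea: a cheap "low-level" force is
# integrated with a small inner step, and an expensive "correction" force
# (here an analytic term standing in for the AI-minus-SE correction) is
# applied with a larger outer step, RESPA-style. All quantities illustrative.
def force_low(x):      # stands in for the cheap SE-QM/MM-PME force
    return -x

def force_corr(x):     # stands in for the expensive correction force
    return -0.1 * x**3

def respa_step(x, v, dt_outer, n_inner, m=1.0):
    dt_inner = dt_outer / n_inner
    v += 0.5 * dt_outer * force_corr(x) / m          # outer half-kick
    for _ in range(n_inner):                          # inner velocity-Verlet loop
        v += 0.5 * dt_inner * force_low(x) / m
        x += dt_inner * v
        v += 0.5 * dt_inner * force_low(x) / m
    v += 0.5 * dt_outer * force_corr(x) / m          # outer half-kick
    return x, v
```

Because the splitting is time-reversible and symplectic, the toy system's total energy stays bounded even though the correction force is evaluated only once per outer step.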
Nonlinear wave propagation using three different finite difference schemes (category 2 application)
NASA Technical Reports Server (NTRS)
Pope, D. Stuart; Hardin, J. C.
1995-01-01
Three common finite difference schemes are used to examine the computation of one-dimensional nonlinear wave propagation. The schemes are studied for their responses to numerical parameters such as time step selection, boundary condition implementation, and discretization of the governing equations. The performance of the schemes is compared, and various numerical phenomena peculiar to each are discussed.
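As a minimal illustration of the kind of scheme studied, and not one of the paper's three schemes specifically, here is a conservative first-order upwind discretization of the inviscid Burgers equation with the time step chosen from a CFL condition:

```python
def burgers_upwind(u, dx, cfl=0.9, t_end=0.5):
    """March u_t + (u^2/2)_x = 0 to t_end with first-order upwind (assumes u > 0)."""
    t = 0.0
    while t_end - t > 1e-12:
        smax = max(abs(val) for val in u)
        dt = min(cfl * dx / smax, t_end - t)        # CFL-limited time step selection
        f = [0.5 * val * val for val in u]          # flux u^2/2
        # conservative upwind update with periodic boundary (f[-1] wraps around)
        u = [u[i] - dt / dx * (f[i] - f[i - 1]) for i in range(len(u))]
        t += dt
    return u
```

Because the update is conservative and (for positive u with CFL below one) monotone, the total mass is preserved and no new extrema appear, which is one of the properties such comparisons typically probe.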
Twin Signature Schemes, Revisited
NASA Astrophysics Data System (ADS)
Schäge, Sven
In this paper, we revisit the twin signature scheme by Naccache, Pointcheval and Stern from CCS 2001 that is secure under the Strong RSA (SRSA) assumption and improve its efficiency in several ways. First, we present a new twin signature scheme that is based on the Strong Diffie-Hellman (SDH) assumption in bilinear groups and allows for very short signatures and key material. A big advantage of this scheme is that, in contrast to the original scheme, it does not require a computationally expensive function for mapping messages to primes. We prove this new scheme secure under adaptive chosen message attacks. Second, we present a modification that allows a significant increase in efficiency when signing long messages. This construction uses collision-resistant hash functions as its basis. As a result, our improvements make the signature length independent of the message size. Our construction deviates from the standard hash-and-sign approach in which the hash value of the message is signed in place of the message itself. We show that in the case of twin signatures, one can exploit the properties of the hash function as an integral part of the signature scheme. This improvement can be applied to both the SRSA-based and the SDH-based twin signature scheme.
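For contrast with the twin-signature construction, the standard hash-and-sign pattern mentioned above can be sketched as follows; `toy_sign` is a deliberately insecure placeholder (not any real signature scheme), and the point is only that the signed input, and hence the signature, has a fixed length regardless of message size.

```python
# Generic hash-and-sign pattern: sign H(m) instead of m, so the signing
# primitive always sees a fixed-length input. `toy_sign` is a placeholder
# keyed hash standing in for a real signature algorithm.
import hashlib

def toy_sign(secret: bytes, digest: bytes) -> bytes:
    return hashlib.sha256(secret + digest).digest()   # placeholder, NOT secure

def hash_and_sign(secret: bytes, message: bytes) -> bytes:
    digest = hashlib.sha256(message).digest()          # fixed 32-byte input
    return toy_sign(secret, digest)
```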
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
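One well-known member of the third-order Runge-Kutta family analyzed in such studies is the strong-stability-preserving scheme of Shu and Osher, shown here only as a concrete instance (the report derives the general family, not this scheme alone):

```python
# Shu-Osher SSP third-order Runge-Kutta: three explicit stages written as
# convex combinations of forward-Euler steps.
def ssp_rk3_step(f, t, y, h):
    k1 = y + h * f(t, y)
    k2 = 0.75 * y + 0.25 * (k1 + h * f(t + h, k1))
    return y / 3.0 + 2.0 / 3.0 * (k2 + h * f(t + 0.5 * h, k2))
```

Halving the step size should cut the global error by roughly a factor of eight, the signature of third-order accuracy.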
NASA Astrophysics Data System (ADS)
Klaij, C. M.; van der Vegt, J. J. W.; van der Ven, H.
2006-12-01
The space-time discontinuous Galerkin discretization of the compressible Navier-Stokes equations results in a non-linear system of algebraic equations, which we solve with pseudo-time stepping methods. We show that explicit Runge-Kutta methods developed for the Euler equations suffer from a severe stability constraint linked to the viscous part of the equations and propose an alternative to relieve this constraint while preserving locality. To evaluate its effectiveness, we compare with an implicit-explicit Runge-Kutta method which does not suffer from the viscous stability constraint. We analyze the stability of the methods and illustrate their performance by computing the flow around a 2D airfoil and a 3D delta wing at low and moderate Reynolds numbers.
NASA Technical Reports Server (NTRS)
Molnar, Melissa; Marek, C. John
2004-01-01
A simplified kinetic scheme for Jet-A, and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes that are being developed at Glenn. The two time step method is either an initial time averaged value (step one) or an instantaneous value (step two). The switch is based on the water concentration in moles/cc of 1x10(exp -20). The results presented here results in a correlation that gives the chemical kinetic time as two separate functions. This two step method is used as opposed to a one step time averaged method previously developed to determine the chemical kinetic time with increased accuracy. The first time averaged step is used at the initial times for smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, initial water to fuel mass ratio, temperature, and pressure. The second instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentration, pressure and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates were then used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx were obtained for Jet-A fuel and methane with and without water injection to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water to fuel mass ratio, pressure and temperature (T3
2012-06-01
slopes, the measured offset of the spot centers have to be divided by the focal length of the lenslets. In this study, the slope error measured by the...moves the mirror surface in one direction from a flat reference producing concave shapes. In order to allow bidirectional control, the mirror is...Adaptive Optics (AO) testbed. In most custom-built adaptive optics control problems, spatial resolution and available stroke of the deformable mirror
Ruiz-Garbajosa, Patricia; Bonten, Marc J. M.; Robinson, D. Ashley; Top, Janetta; Nallapareddy, Sreedhar R.; Torres, Carmen; Coque, Teresa M.; Cantón, Rafael; Baquero, Fernando; Murray, Barbara E.; del Campo, Rosa; Willems, Rob J. L.
2006-01-01
A multilocus sequence typing (MLST) scheme based on seven housekeeping genes was used to investigate the epidemiology and population structure of Enterococcus faecalis. MLST of 110 isolates from different sources and geographic locations revealed 55 different sequence types that grouped into four major clonal complexes (CC2, CC9, CC10, and CC21) by use of eBURST. Two of these clonal complexes, CC2 and CC9, are particularly fit in the hospital environment, as CC2 includes the previously described BVE clonal complex identified by an alternative MLST scheme and CC9 includes exclusively isolates from hospitalized patients. Identical alleles were found in genetically diverse isolates with no linkage disequilibrium, while the different MLST loci gave incongruent phylogenetic trees. This demonstrates that recombination is an important mechanism driving genetic variation in E. faecalis and suggests an epidemic population structure for E. faecalis. Our novel MLST scheme provides an excellent tool for investigating local and short-term epidemiology as well as global epidemiology, population structure, and genetic evolution of E. faecalis. PMID:16757624
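The eBURST-style grouping used above can be sketched as a union-find over single-locus variants: sequence types (STs) are allele profiles over the seven loci, and STs differing at exactly one locus are linked into the same clonal complex. The profiles in the test are invented toy data, not real E. faecalis STs.

```python
# Group sequence types (allele profiles) into clonal complexes by linking
# single-locus variants (SLVs), in the spirit of eBURST.
def clonal_complexes(profiles):
    ids = list(profiles)
    parent = {i: i for i in ids}

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a in ids:
        for b in ids:
            if a < b:
                diffs = sum(x != y for x, y in zip(profiles[a], profiles[b]))
                if diffs == 1:        # single-locus variants -> same complex
                    parent[find(a)] = find(b)

    groups = {}
    for i in ids:
        groups.setdefault(find(i), []).append(i)
    return sorted(sorted(g) for g in groups.values())
```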
Landis, Wayne G; Markiewicz, April J; Ayre, Kim K; Johns, Annie F; Harris, Meagan J; Stinson, Jonah M; Summers, Heather M
2017-01-01
Adaptive management has been presented as a method for the remediation, restoration, and protection of ecological systems. Recent reviews have found that the implementation of adaptive management has been unsuccessful in many instances. We present a modification of the model first formulated by Wyant and colleagues that puts ecological risk assessment into a central role in the adaptive management process. This construction has 3 overarching segments. Public engagement and governance determine the goals of society by identifying endpoints and specifying constraints such as costs. The research, engineering, risk assessment, and management section contains the decision loop estimating risk, evaluating options, specifying the monitoring program, and incorporating the data to re-evaluate risk. The 3rd component is the recognition that risk and public engagement can be altered by various externalities such as climate change, economics, technological developments, and population growth. We use the South River, Virginia, USA, study area and our previous research to illustrate each of these components. In our example, we use the Bayesian Network Relative Risk Model to estimate risks, evaluate remediation options, and provide lists of monitoring priorities. The research, engineering, risk assessment, and management loop also provides a structure in which data and the records of what worked and what did not, the learning process, can be stored. The learning process is a central part of adaptive management. We conclude that risk assessment can and should become an integral part of the adaptive management process. Integr Environ Assess Manag 2017;13:115-126. © 2016 SETAC.
Fukuda, Ryoichi; Ehara, Masahiro
2014-10-21
Solvent effects on electronic excitation spectra are considerable in many situations; therefore, we propose an efficient and reliable computational scheme that is based on the symmetry-adapted cluster-configuration interaction (SAC-CI) method and the polarizable continuum model (PCM) for describing electronic excitations in solution. The new scheme combines the recently proposed first-order PCM SAC-CI method with the PTE (perturbation theory at the energy level) PCM SAC scheme. This is essentially equivalent to the usual SAC and SAC-CI computations using the PCM Hartree-Fock orbitals and integrals, except for additional correction terms that represent solute-solvent interactions. The test calculations demonstrate that the present method is a very good approximation of the more costly iterative PCM SAC-CI method for excitation energies of closed-shell molecules in their equilibrium geometry. This method provides very accurate values of electric dipole moments but is insufficient for describing the charge-transfer (CT) indices in polar solvents. The present method accurately reproduces the absorption spectra and their solvatochromism of push-pull type 2,2′-bithiophene molecules. Significant solvent and substituent effects on these molecules are intuitively visualized using the CT indices. The present method is the simplest theoretically consistent extension of the SAC-CI method for including the PCM environment, and it is therefore useful for theoretical and computational spectroscopy.
Ahluwalia, Poonam Khanijo; Nema, Arvind K
2011-07-01
Selection of optimum locations for new facilities and decisions regarding capacities at the proposed facilities are major concerns for municipal authorities/managers. The decision as to whether a single facility is preferred over multiple facilities of smaller capacity varies with the priorities given to cost and associated risks, such as environmental risk, health risk, or risk perceived by society. Currently, management of waste streams such as computer waste is carried out using rudimentary practices and is flourishing as an unorganized sector, mainly as backyard workshops, in many cities of developing nations such as India. Uncertainty in the quantification of computer waste generation is another major concern due to the informal setup of the present computer waste management scenario. Hence, there is a need to simultaneously address uncertainty in waste generation quantities while analyzing the tradeoffs between cost and associated risks. The present study addresses these issues in a multi-time-step, multi-objective decision-support model that can handle the multiple objectives of cost, environmental risk, socially perceived risk, and health risk while selecting the optimum configuration of existing and proposed facilities (location and capacities).
ERIC Educational Resources Information Center
Martin, Nancy
Presented is a technical report concerning the use of a mathematical model describing certain aspects of the duplication and selection processes in natural genetic adaptation. This reproductive plan/model occurs in artificial genetics (the use of ideas from genetics to develop general problem solving techniques for computers). The reproductive…
Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media
NASA Astrophysics Data System (ADS)
Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo
2016-04-01
The proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multi-resolution approach are: 1) the computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales, 2) multi-resolution presentation of heterogeneity as well as of all other input and output variables, 3) an accurate, adaptive and efficient strategy, and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but also describes subsurface processes closely related to their understood physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we will show recent improvements within the proposed methodology. Since the "state of the art" multiresolution approach usually uses the method of lines and only a spatially adaptive procedure, temporal approximation has rarely been considered as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only in lines where solution changes are intensive. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across
Zonal multigrid solution of compressible flow problems on unstructured and adaptive meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1989-01-01
The simultaneous use of adaptive meshing techniques with a multigrid strategy for solving the 2-D Euler equations in the context of unstructured meshes is studied. To obtain optimal efficiency, methods capable of computing locally improved solutions without recourse to global recalculations are pursued. A method for locally refining an existing unstructured mesh, without regenerating a new global mesh is employed, and the domain is automatically partitioned into refined and unrefined regions. Two multigrid strategies are developed. In the first, time-stepping is performed on a global fine mesh covering the entire domain, and convergence acceleration is achieved through the use of zonal coarse grid accelerator meshes, which lie under the adaptively refined regions of the global fine mesh. Both schemes are shown to produce similar convergence rates to each other, and also with respect to a previously developed global multigrid algorithm, which performs time-stepping throughout the entire domain, on each mesh level. However, the present schemes exhibit higher computational efficiency due to the smaller number of operations on each level.
Lefrancois, Daniel; Wormit, Michael; Dreuw, Andreas
2015-09-28
For the investigation of molecular systems with electronic ground states exhibiting multi-reference character, a spin-flip (SF) version of the algebraic diagrammatic construction (ADC) scheme for the polarization propagator up to third order perturbation theory (SF-ADC(3)) is derived via the intermediate state representation and implemented into our existing ADC computer program adcman. The accuracy of these new SF-ADC(n) approaches is tested on typical situations, in which the ground state acquires multi-reference character, like bond breaking of H2 and HF, the torsional motion of ethylene, and the excited states of rectangular and square-planar cyclobutadiene. Overall, the results of SF-ADC(n) reveal an accurate description of these systems in comparison with standard multi-reference methods. Thus, the spin-flip versions of ADC are easy-to-use methods for the calculation of "few-reference" systems, which possess a stable single-reference triplet ground state.
NASA Astrophysics Data System (ADS)
Lefrancois, Daniel; Wormit, Michael; Dreuw, Andreas
2015-09-01
For the investigation of molecular systems with electronic ground states exhibiting multi-reference character, a spin-flip (SF) version of the algebraic diagrammatic construction (ADC) scheme for the polarization propagator up to third order perturbation theory (SF-ADC(3)) is derived via the intermediate state representation and implemented into our existing ADC computer program adcman. The accuracy of these new SF-ADC(n) approaches is tested on typical situations, in which the ground state acquires multi-reference character, like bond breaking of H2 and HF, the torsional motion of ethylene, and the excited states of rectangular and square-planar cyclobutadiene. Overall, the results of SF-ADC(n) reveal an accurate description of these systems in comparison with standard multi-reference methods. Thus, the spin-flip versions of ADC are easy-to-use methods for the calculation of "few-reference" systems, which possess a stable single-reference triplet ground state.
Multi-resolution analysis for ENO schemes
NASA Technical Reports Server (NTRS)
Harten, Ami
1991-01-01
Given a function, u(x), which is represented by its cell averages on cells formed by some unstructured grid, we show how to decompose the function into various scales of variation. This is done by considering a set of nested grids in which the given grid is the finest, and identifying in each locality the coarsest grid in the set from which u(x) can be recovered to a prescribed accuracy. This multi-resolution analysis was applied to essentially non-oscillatory (ENO) schemes in order to advance the solution by one time step. This is accomplished by decomposing the numerical solution at the beginning of each time step into levels of resolution, and performing the computation in each locality on the appropriate coarser grid. An efficient algorithm for implementing this program in the 1-D case is presented; this algorithm can be extended to the multi-dimensional case with Cartesian grids.
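The cell-average decomposition idea can be sketched on a uniform dyadic 1-D grid (a simplification of the unstructured-grid setting described above): each coarse average is the mean of its two children, and the stored "detail" is exactly what is needed to recover the fine level. Small details flag regions where the coarser grid already suffices.

```python
# Dyadic multi-resolution decomposition of cell averages and its exact inverse.
def decompose(cell_avgs):
    """Peel off dyadic levels; return coarsest averages and details (finest first)."""
    details = []
    u = list(cell_avgs)
    while len(u) > 1 and len(u) % 2 == 0:
        coarse = [0.5 * (u[2 * i] + u[2 * i + 1]) for i in range(len(u) // 2)]
        # detail = departure of the left child from the coarse average
        details.append([u[2 * i] - coarse[i] for i in range(len(coarse))])
        u = coarse
    return u, details

def reconstruct(coarse, details):
    u = list(coarse)
    for d in reversed(details):
        fine = []
        for c, di in zip(u, d):
            fine.extend([c + di, c - di])   # right child mirrors the left
        u = fine
    return u
```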
Froehle, Bradley; Persson, Per-Olof
2014-09-01
We present a high-order accurate scheme for coupled fluid–structure interaction problems. The fluid is discretized using a discontinuous Galerkin method on unstructured tetrahedral meshes, and the structure uses a high-order volumetric continuous Galerkin finite element method. Standard radial basis functions are used for the mesh deformation. The time integration is performed using a partitioned approach based on implicit–explicit Runge–Kutta methods. The resulting scheme fully decouples the implicit solution procedures for the fluid and the solid parts, which we perform using two separate efficient parallel solvers. We demonstrate up to fifth order accuracy in time on a non-trivial test problem, on which we also show that additional subiterations are not required. We solve a benchmark problem of a cantilever beam in a shedding flow, and show good agreement with other results in the literature. Finally, we solve for the flow around a thin membrane at a high angle of attack in both 2D and 3D, and compare with the results obtained with a rigid plate.
NASA Astrophysics Data System (ADS)
Zou, LiYan; Li, Miao; Guo, ChenChen; Wang, YongJia; Li, QingFeng; Liu, Ling
2016-12-01
By considering different values of the time step for the potential updates in the ultra-relativistic quantum molecular dynamics (UrQMD) model, we examine its influence on observables such as the yield and collective flow of nucleons and pions from heavy-ion collisions around 1 GeV/nucleon. It is found that these observables are affected to some extent by the choice of the time step, and the impact of the time step on the pion-related observables is more visible than that on the nucleon-related ones. However, its effect on the π-/π+ yield ratio and on the elliptic flow difference between neutrons and protons, which have been taken as sensitive observables for probing the density-dependent nuclear symmetry energy at high densities, is fairly weak.
A fully implicit scheme for the barotropic primitive equations
NASA Technical Reports Server (NTRS)
Cohn, S. E.; Dee, D.; Marchesin, D.; Isaacson, E.; Zwas, G.
1985-01-01
An efficient implicit finite-difference method is developed and tested for a global barotropic model. The scheme requires, at each time step, the solution of only one-dimensional block-tridiagonal linear systems. This additional computation is offset by the use of a time step chosen independently of the mesh spacing. The method is second-order accurate in time and fourth-order accurate in space. Present experience indicates that this implicit method is practical for numerical simulation on fine meshes.
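The implicit step described above reduces to one-dimensional (block-)tridiagonal solves; the scalar Thomas algorithm is the basic building block, with the block version replacing scalar divisions by small matrix inversions. A minimal sketch:

```python
# Thomas algorithm for a tridiagonal system. Conventions: a = sub-diagonal
# (a[0] ignored), b = main diagonal, c = super-diagonal (c[-1] ignored),
# d = right-hand side. Assumes the system is diagonally dominant enough
# that no pivoting is needed, as is typical for implicit time steps.
def thomas(a, b, c, d):
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The work is O(n) per solve, which is why one-dimensional tridiagonal sweeps per time step remain cheap even on fine meshes.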
Lefrancois, Daniel; Wormit, Michael; Dreuw, Andreas
2015-09-28
For the investigation of molecular systems with electronic ground states exhibiting multi-reference character, a spin-flip (SF) version of the algebraic diagrammatic construction (ADC) scheme for the polarization propagator up to third order perturbation theory (SF-ADC(3)) is derived via the intermediate state representation and implemented into our existing ADC computer program adcman. The accuracy of these new SF-ADC(n) approaches is tested on typical situations, in which the ground state acquires multi-reference character, like bond breaking of H{sub 2} and HF, the torsional motion of ethylene, and the excited states of rectangular and square-planar cyclobutadiene. Overall, the results of SF-ADC(n) reveal an accurate description of these systems in comparison with standard multi-reference methods. Thus, the spin-flip versions of ADC are easy-to-use methods for the calculation of “few-reference” systems, which possess a stable single-reference triplet ground state.
An adaptive grid algorithm for one-dimensional nonlinear equations
NASA Technical Reports Server (NTRS)
Gutierrez, William E.; Hills, Richard G.
1990-01-01
Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements in solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems are studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low and high frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are then used to solve one-dimensional transient partial differential equations, such as the diffusion and convection-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of Richards' equation. For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and
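The Picard (successive substitution) strategy mentioned above can be sketched generically: the nonlinearity is evaluated at the previous iterate and the resulting fixed-point map is iterated to tolerance. The scalar driver below is illustrative only; in the PDE setting each pass would be a linear solve with frozen coefficients rather than a scalar function evaluation.

```python
# Picard iteration: solve x = g(x) by successive substitution, stopping when
# consecutive iterates agree to the requested tolerance.
def picard_solve(g, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Picard iteration did not converge")
```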
An adaptive multigrid model for hurricane track prediction
NASA Technical Reports Server (NTRS)
Fulton, Scott R.
1993-01-01
This paper describes a simple numerical model for hurricane track prediction which uses a multigrid method to adapt the model resolution as the vortex moves. The model is based on the modified barotropic vorticity equation, discretized in space by conservative finite differences and in time by a Runge-Kutta scheme. A multigrid method is used to solve an elliptic problem for the streamfunction at each time step. Nonuniform resolution is obtained by superimposing uniform grids of different spatial extent; these grids move with the vortex as it moves. Preliminary numerical results indicate that the local mesh refinement allows accurate prediction of the hurricane track with substantially less computer time than required on a single uniform grid.
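The multigrid component used here to solve the elliptic streamfunction problem each time step can be sketched on a 1-D Poisson model problem (a generic textbook V-cycle, not the hurricane model's solver; the unit-spacing stencil, grids of size 2^k + 1, and Gauss-Seidel smoothing are assumptions):

```python
def smooth(u, f, sweeps):
    """Gauss-Seidel sweeps for the stencil 2u[i] - u[i-1] - u[i+1] = f[i]."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + f[i])
    return u

def residual(u, f):
    """r = f - A u for the same stencil; zero at the Dirichlet boundaries."""
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2.0 * u[i] - u[i - 1] - u[i + 1])
    return r

def vcycle(u, f):
    """One V-cycle for -u'' = f (f pre-scaled by h^2), grid size 2^k + 1."""
    smooth(u, f, 2)                               # pre-smoothing
    n = len(u)
    if n > 3:
        r = residual(u, f)
        # Full-weighting restriction; the factor 4 rescales the right-hand
        # side to the coarser grid spacing 2h (since f carries h^2).
        fc = [0.0] + [4.0 * (0.25 * r[2*i - 1] + 0.5 * r[2*i] + 0.25 * r[2*i + 1])
                      for i in range(1, n // 2)] + [0.0]
        ec = vcycle([0.0] * (n // 2 + 1), fc)     # coarse-grid correction
        # Linear interpolation of the correction back to the fine grid
        for i in range(1, n - 1):
            u[i] += ec[i // 2] if i % 2 == 0 else 0.5 * (ec[i // 2] + ec[i // 2 + 1])
    smooth(u, f, 2)                               # post-smoothing
    return u
```

Each cycle damps both high-frequency error (by smoothing) and low-frequency error (by the coarse-grid correction), which is why a few cycles suffice per time step.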
An Energy Decaying Scheme for Nonlinear Dynamics of Shells
NASA Technical Reports Server (NTRS)
Bottasso, Carlo L.; Bauchau, Olivier A.; Choi, Jou-Young; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
A novel integration scheme for nonlinear dynamics of geometrically exact shells is developed based on the inextensible director assumption. The new algorithm is designed so as to imply the strict decay of the system total mechanical energy at each time step, and consequently unconditional stability is achieved in the nonlinear regime. Furthermore, the scheme features tunable high frequency numerical damping and it is therefore stiffly accurate. The method is tested for a finite element spatial formulation of shells based on mixed interpolations of strain tensorial components and on a two-parameter representation of director rotations. The robustness of the scheme is illustrated with the help of numerical examples.
Accuracy of schemes with nonuniform meshes for compressible fluid flows
NASA Technical Reports Server (NTRS)
Turkel, E.
1985-01-01
The accuracy of the space discretization for time-dependent problems when a nonuniform mesh is used is considered. Many schemes reduce to first-order accuracy, while a popular finite volume scheme is even inconsistent for general grids. This accuracy is measured in physical variables. However, when accuracy is measured in computational variables, second-order accuracy can be obtained. This is meaningful only if the mesh accurately reflects the properties of the solution. In addition, the stability properties of some improved accurate schemes are analyzed, and it is shown that they also allow larger time steps when Runge-Kutta type methods are used to advance in time.
A second-order characteristic line scheme for solving a juvenile-adult model of amphibians.
Deng, Keng; Wang, Yi
2015-01-01
In this paper, we develop a second-order characteristic line scheme for a nonlinear hierarchical juvenile-adult population model of amphibians. The idea of the scheme is not to follow the characteristics from the initial data, but for each time step to find the origins of the grid nodes at the previous time level. Numerical examples are presented to demonstrate the accuracy of the scheme and its capability to handle solutions with singularity.
THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS
Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.
2012-01-01
We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potential of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
Building a better leapfrog. [an algorithm for ensuring time symmetry in any integration scheme]
NASA Technical Reports Server (NTRS)
Hut, Piet; Makino, Jun; Mcmillan, Steve
1995-01-01
In stellar dynamical computer simulations, as well as other types of simulations using particles, time step size is often held constant in order to guarantee a high degree of energy conservation. In many applications, allowing the time step size to change in time can offer a great saving in computational cost, but variable-size time steps usually imply a substantial degradation in energy conservation. We present a 'meta-algorithm' for choosing time steps in such a way as to guarantee time symmetry in any integration scheme, thus allowing vastly improved energy conservation for orbital calculations with variable time steps. We apply the algorithm to the familiar leapfrog scheme, and generalize to higher order integration schemes, showing how the stability properties of the fixed-step leapfrog scheme can be extended to higher order, variable-step integrators such as the Hermite method. We illustrate the remarkable properties of these time-symmetric integrators for the case of a highly eccentric elliptical Kepler orbit and discuss applications to more complex problems.
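The time-symmetrization idea can be sketched as follows (an illustrative reconstruction, not the authors' code; the step-size criterion h ∝ r^(3/2) and the use of three fixed-point iterations are assumptions). The step length h is chosen so that it equals the average of the step-size criterion evaluated at the beginning and at the end of the step, which restores time symmetry and hence bounded energy error:

```python
import math

def accel(r):
    """Point-mass gravitational acceleration with GM = 1."""
    d = math.hypot(r[0], r[1])
    return (-r[0] / d**3, -r[1] / d**3)

def step_size(r, v, eta=0.01):
    """Candidate step: a small fraction of the local orbital time scale."""
    return eta * math.hypot(r[0], r[1]) ** 1.5

def leapfrog(r, v, h):
    """One kick-drift-kick leapfrog step of length h."""
    a = accel(r)
    v = (v[0] + 0.5 * h * a[0], v[1] + 0.5 * h * a[1])
    r = (r[0] + h * v[0], r[1] + h * v[1])
    a = accel(r)
    v = (v[0] + 0.5 * h * a[0], v[1] + 0.5 * h * a[1])
    return r, v

def symmetric_step(r, v, n_iter=3):
    """Choose h so that h = (h(start) + h(end)) / 2, by fixed-point iteration."""
    h = step_size(r, v)
    for _ in range(n_iter):
        r1, v1 = leapfrog(r, v, h)           # trial step
        h = 0.5 * (step_size(r, v) + step_size(r1, v1))
    return leapfrog(r, v, h)
```

On an eccentric Kepler orbit, an ordinary variable-step leapfrog shows secular energy drift, whereas the symmetrized step keeps the energy error oscillatory and bounded.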
NASA Astrophysics Data System (ADS)
Halliday, I.; Xu, X.; Burgin, K.
2017-02-01
An extended Benzi-Dellar lattice Boltzmann equation scheme [R. Benzi, S. Succi, and M. Vergassola, Europhys. Lett. 13, 727 (1990), 10.1209/0295-5075/13/8/010; R. Benzi, S. Succi, and M. Vergassola, Phys. Rep. 222, 145 (1992), 10.1016/0370-1573(92)90090-M; P. J. Dellar, Phys. Rev. E 65, 036309 (2002), 10.1103/PhysRevE.65.036309] is developed and applied to the problem of confirming, at low Re and drop fluid concentration, c, the variation of effective shear viscosity, ηeff = η1[1 + f(η1,η2)c], with respect to c for a sheared, two-dimensional, initially crystalline emulsion [here η1 (η2) is the fluid (drop fluid) shear viscosity]. Data obtained with our enhanced multicomponent lattice Boltzmann method, using average shear stress and hydrodynamic dissipation, agree well once appropriate corrections to Landau's volume average shear stress [L. Landau and E. M. Lifshitz, Fluid Mechanics, 6th ed. (Pergamon, London, 1966)] are applied. Simulation results also confirm the expected form for f(η1,η2), and they provide a reasonable estimate of its parameters. Most significantly, perhaps, the generality of our data supports the validity of Taylor's disputed simplification [G. I. Taylor, Proc. R. Soc. London, Ser. A 138, 133 (1932), 10.1098/rspa.1932.0175] to reduce the effect of one hydrodynamic boundary condition (on the continuity of the normal contraction of stress) to an assumption that interfacial tension is sufficiently strong to maintain a spherical drop shape.
NASA Astrophysics Data System (ADS)
Wen, Baole; Chini, Gregory P.; Kerswell, Rich R.; Doering, Charles R.
2015-10-01
An alternative computational procedure for numerically solving a class of variational problems arising from rigorous upper-bound analysis of forced-dissipative infinite-dimensional nonlinear dynamical systems, including the Navier-Stokes and Oberbeck-Boussinesq equations, is analyzed and applied to Rayleigh-Bénard convection. A proof that the only steady state to which this numerical algorithm can converge is the required global optimal of the relevant variational problem is given for three canonical flow configurations. In contrast with most other numerical schemes for computing the optimal bounds on transported quantities (e.g., heat or momentum) within the "background field" variational framework, which employ variants of Newton's method and hence require very accurate initial iterates, the new computational method is easy to implement and, crucially, does not require numerical continuation. The algorithm is used to determine the optimal background-method bound on the heat transport enhancement factor, i.e., the Nusselt number (Nu), as a function of the Rayleigh number (Ra), Prandtl number (Pr), and domain aspect ratio L in two-dimensional Rayleigh-Bénard convection between stress-free isothermal boundaries (Rayleigh's original 1916 model of convection). The result of the computation is significant because analyses, laboratory experiments, and numerical simulations have suggested a range of exponents α and β in the presumed Nu ∼ Pr^α Ra^β scaling relation. The computations clearly show that for Ra ≤ 10^10 at fixed L = 2√2, Nu ≤ 0.106 Pr^0 Ra^{5/12}, which indicates that molecular transport cannot generally be neglected in the "ultimate" high-Ra regime.
Compact integration factor methods for complex domains and adaptive mesh refinement.
Liu, Xinfeng; Nie, Qing
2010-08-10
The implicit integration factor (IIF) method, a class of efficient semi-implicit temporal schemes, was introduced recently for stiff reaction-diffusion equations. To reduce the cost of IIF, the compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of exponential matrices associated with the diffusion operators in two and three spatial dimensions for Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinates, due to the compact representation of the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF to other curvilinear coordinates through the examples of polar and spherical coordinates. The new cIIF method in polar and spherical coordinates has computational efficiency and stability properties similar to those of cIIF in Cartesian coordinates. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability condition of cIIF. Because the second-order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply these methods to simulating a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR, curvilinear and Cartesian coordinates. Excellent performance of the new methods is observed.
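A scalar analogue of the integration-factor idea can illustrate why the step size is not limited by the stiff linear term (an illustrative sketch, not the paper's cIIF: there the exponential of the discretized diffusion matrix plays the role of the scalar decay factor, and the nonlinear term is treated implicitly rather than explicitly):

```python
import math

def if_euler(u0, lam, nonlinear, h, n_steps):
    """First-order integrating-factor (exponential) Euler for
        u' = -lam * u + nonlinear(u).
    The stiff linear part is integrated exactly through the factor
    exp(-lam * h), so stability does not restrict h via lam; only the
    nonlinear term is advanced explicitly.
    """
    u = u0
    for _ in range(n_steps):
        u = math.exp(-lam * h) * (u + h * nonlinear(u))
    return u
```

For u' = -10u + 1 the scheme tracks the exact steady state u = 0.1 with modest steps and, unlike explicit Euler, remains stable even when h far exceeds the explicit limit 2/lam.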
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Carpenter, Mark H.; Lockard, David P.
2009-01-01
Recent experience in the application of an optimized, second-order, backward-difference (BDF2OPT) temporal scheme is reported. The primary focus of the work is on obtaining accurate solutions of the unsteady Reynolds-averaged Navier-Stokes equations over long periods of time for aerodynamic problems of interest. The baseline flow solver under consideration uses a particular BDF2OPT temporal scheme with a dual-time-stepping algorithm for advancing the flow solutions in time. Numerical difficulties are encountered with this scheme when the flow code is run for a large number of time steps, a behavior not seen with the standard second-order, backward-difference, temporal scheme. Based on a stability analysis, slight modifications to the BDF2OPT scheme are suggested. The performance and accuracy of this modified scheme are assessed by comparing the computational results with other numerical schemes and experimental data.
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) a high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
Yu, Sungduk; Pritchard, Michael S.
2015-12-17
The effect of global climate model (GCM) time step—which also controls how frequently global and embedded cloud resolving scales are coupled—is examined in the Superparameterized Community Atmosphere Model ver 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m^{2}) and longwave cloud forcing (~5 W/m^{2}) occur as scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as a global model time step is reduced may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. Lastly, the results may also be useful for helping to tune them.
The GEMPAK Barnes objective analysis scheme
NASA Technical Reports Server (NTRS)
Koch, S. E.; Desjardins, M.; Kocin, P. J.
1981-01-01
GEMPAK, an interactive computer software system developed for the purpose of assimilating, analyzing, and displaying various conventional and satellite meteorological data types is discussed. The objective map analysis scheme possesses certain characteristics that allowed it to be adapted to meet the analysis needs of GEMPAK. Those characteristics and the specific adaptation of the scheme to GEMPAK are described. A step-by-step guide for using the GEMPAK Barnes scheme on an interactive computer (in real time) to analyze various types of meteorological datasets is also presented.
Stability analysis of intermediate boundary conditions in approximate factorization schemes
NASA Technical Reports Server (NTRS)
South, J. C., Jr.; Hafez, M. M.; Gottlieb, D.
1986-01-01
The paper discusses the role of the intermediate boundary condition in the AF2 scheme used by Holst for simulation of the transonic full potential equation. It is shown that the treatment suggested by Holst led to a restriction on the time step and ways to overcome this restriction are suggested. The discussion is based on the theory developed by Gustafsson, Kreiss, and Sundstrom and also on the von Neumann method.
Multi-resolution analysis for ENO schemes
NASA Technical Reports Server (NTRS)
Harten, Ami
1993-01-01
Given a function u(x) which is represented by its cell averages in cells formed by some unstructured grid, we show how to decompose the function into various scales of variation. This is done by considering a set of nested grids in which the given grid is the finest, and identifying in each locality the coarsest grid in the set from which u(x) can be recovered to a prescribed accuracy. We apply this multi-resolution analysis to essentially non-oscillatory (ENO) schemes in order to reduce the number of numerical flux computations needed to advance the solution by one time step. This is accomplished by decomposing the numerical solution at the beginning of each time step into levels of resolution, and performing the computation in each locality on the appropriate coarser grid. We present an efficient algorithm for implementing this program in the one-dimensional case; this algorithm can be extended to the multi-dimensional case with Cartesian grids.
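The decomposition into scales can be sketched for one level of 1-D cell averages (a simplified sketch: piecewise-constant prediction is assumed here, whereas Harten's analysis uses higher-order reconstruction; the threshold test is the standard truncation criterion). Coarse averages are exact pairwise means, fine averages are predicted from them, and the "details" measure what the coarse grid cannot represent:

```python
def mr_details(u_fine):
    """One level of Harten-type multiresolution for 1-D cell averages.

    Coarsening by pairwise averaging is exact; the fine averages are then
    predicted from the coarse ones (piecewise-constant here), and the
    details are the prediction errors.
    """
    u_coarse = [0.5 * (u_fine[2*i] + u_fine[2*i + 1])
                for i in range(len(u_fine) // 2)]
    details = [u_fine[2*i] - u_coarse[i] for i in range(len(u_coarse))]
    return u_coarse, details

def flag_cells(details, tol):
    """Coarse cells whose detail exceeds tol need the fine-grid fluxes."""
    return [abs(d) > tol for d in details]
```

Away from sharp features the details are negligible and the expensive flux computations can be done on the coarse grid; only flagged cells retain the fine grid.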
On Tenth Order Central Spatial Schemes
Sjogreen, B; Yee, H C
2007-05-14
This paper explores the performance of the tenth-order central spatial scheme and derives the accompanying energy-norm stable summation-by-parts (SBP) boundary operators. The objective is to employ the resulting tenth-order spatial differencing with the stable SBP boundary operators as a base scheme in the framework of adaptive numerical dissipation control in the high order multistep filter schemes of Yee et al. (1999), Yee and Sjögreen (2002, 2005, 2006, 2007), and Sjögreen and Yee (2004). These schemes were designed for multiscale turbulence flows including strong shock waves and combustion.
2014-11-01
1) Compare the damping character of artificial dissipation and filtering with respect to frequency content (i.e., low-pass response). 2) Formulate the filter as an equivalent artificial dissipation scheme; note the consequence of filter damping for stiff problems. 3) Provide insight on achieving an "ideal" low-pass response for general use. Explicit filters require very high order for a low-pass response and are overly dissipative for small time steps; implicit filters can be efficiently designed for low-pass response.
NASA Astrophysics Data System (ADS)
Rosam, J.; Jimack, P. K.; Mullis, A.
2007-08-01
A fully implicit numerical method based upon adaptively refined meshes for the simulation of binary alloy solidification in 2D is presented. In addition we combine a second-order fully implicit time discretisation scheme with variable step size control to obtain an adaptive time and space discretisation method. The superiority of this method, compared to widely used fully explicit methods, with respect to CPU time and accuracy, is shown. Due to the high nonlinearity of the governing equations a robust and fast solver for systems of nonlinear algebraic equations is needed to solve the intermediate approximations per time step. We use a nonlinear multigrid solver which shows almost h-independent convergence behaviour.
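Variable step-size control of the kind combined here with the implicit discretisation can be sketched generically by step doubling (a textbook controller, not the paper's method; the explicit-midpoint base scheme, the safety factor 0.9, and the growth/shrink limits are assumptions). One full step is compared against two half steps to estimate the local error, and the step size is adjusted to track a tolerance:

```python
def adaptive_integrate(f, u0, t0, t_end, h0, tol, order=2, step=None):
    """Step-doubling error control for a one-step method step(f, u, t, h).

    The difference between one full step and two half steps estimates the
    local error; steps are rejected and shrunk when it exceeds tol, and the
    next step size follows the classical (tol/err)^(1/(order+1)) rule.
    """
    if step is None:
        def step(f, u, t, h):                       # default: explicit midpoint
            return u + h * f(t + 0.5 * h, u + 0.5 * h * f(t, u))
    t, u, h = t0, u0, h0
    while t < t_end:
        h = min(h, t_end - t)
        u_full = step(f, u, t, h)
        u_half = step(f, step(f, u, t, 0.5 * h), t + 0.5 * h, 0.5 * h)
        err = abs(u_half - u_full)
        if err <= tol or h < 1e-12:
            t, u = t + h, u_half       # accept; the half-step result is better
        # adjust h, limiting growth/shrinkage per step
        h *= min(4.0, max(0.1, 0.9 * (tol / (err + 1e-300)) ** (1.0 / (order + 1))))
    return u
```

The same controller structure applies when the base step is an implicit solve, as in the paper; only `step` changes.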
Adaptation of adaptive optics systems.
NASA Astrophysics Data System (ADS)
Xin, Yu; Zhao, Dazun; Li, Chen
1997-10-01
In the paper, a concept of adaptation of adaptive optical systems (AAOS) is proposed. The AAOS has a certain real-time optimization ability against variations in the brightness of detected objects m, the atmospheric coherence length r0, and the atmospheric time constant τ, by means of changing the subaperture number and diameter, the dynamic range, and the system's temporal response. The necessity of an AAOS using a Hartmann-Shack wavefront sensor and some technical approaches are discussed. A scheme and simulation of an AAOS with variable subaperture ability, using both hardware and software, are presented as an example of the system.
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann; Usab, William J., Jr.
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
Adaptable DC offset correction
NASA Technical Reports Server (NTRS)
Golusky, John M. (Inventor); Muldoon, Kelly P. (Inventor)
2009-01-01
Methods and systems for adaptable DC offset correction are provided. An exemplary adaptable DC offset correction system evaluates an incoming baseband signal to determine an appropriate DC offset removal scheme; removes a DC offset from the incoming baseband signal based on the appropriate DC offset scheme in response to the evaluated incoming baseband signal; and outputs a reduced DC baseband signal in response to the DC offset removed from the incoming baseband signal.
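One common removal scheme that such a system might select (an illustrative assumption; the patent does not specify this particular filter) is a one-pole DC-blocking filter, which subtracts the slowly varying mean while passing higher-frequency signal content:

```python
def remove_dc(samples, alpha=0.995):
    """One-pole DC-blocking (high-pass) filter:
        y[n] = x[n] - x[n-1] + alpha * y[n-1]
    A constant offset decays geometrically (as alpha**n), while signal
    components well above the cutoff pass essentially unchanged.
    """
    y, x_prev, y_prev = [], 0.0, 0.0
    for x in samples:
        y_prev = x - x_prev + alpha * y_prev
        x_prev = x
        y.append(y_prev)
    return y
```

The parameter alpha trades settling time against low-frequency attenuation, which is exactly the kind of knob an adaptable scheme-selection stage could tune per signal.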
NASA Astrophysics Data System (ADS)
Carrander, Claes; Mousavi, Seyed Ali; Engdahl, Göran
2017-02-01
In many transformer applications, it is necessary to have a core magnetization model that takes into account both magnetic and electrical effects. This becomes particularly important in three-phase transformers, where the zero-sequence impedance is generally high, and therefore affects the magnetization very strongly. In this paper, we demonstrate a time-step topological simulation method that uses a lumped-element approach to accurately model both the electrical and magnetic circuits. The simulation method is independent of the used hysteresis model. In this paper, a hysteresis model based on the first-order reversal-curve has been used.
A Split-Step Scheme for the Incompressible Navier-Stokes
Henshaw, W; Petersson, N A
2001-06-12
We describe a split-step finite-difference scheme for solving the incompressible Navier-Stokes equations on composite overlapping grids. The split-step approach decouples the solution of the velocity variables from the solution of the pressure. The scheme is based on the velocity-pressure formulation and uses a method of lines approach so that a variety of implicit or explicit time stepping schemes can be used once the equations have been discretized in space. We have implemented both second-order and fourth-order accurate spatial approximations that can be used with implicit or explicit time stepping methods. We describe how to choose appropriate boundary conditions to make the scheme accurate and stable. A divergence damping term is added to the pressure equation to keep the numerical dilatation small. Several numerical examples are presented.
Multistage Schemes with Multigrid for Euler and Navier-Stokes Equations: Components and Analysis
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli
1997-01-01
A class of explicit multistage time-stepping schemes with centered spatial differencing and multigrid is considered for the compressible Euler and Navier-Stokes equations. These schemes are the basis for a family of computer programs (flow codes with multigrid (FLOMG) series) currently used to solve a wide range of fluid dynamics problems, including internal and external flows. In this paper, the components of these multistage time-stepping schemes are defined, discussed, and in many cases analyzed to provide additional insight into their behavior. Special emphasis is given to numerical dissipation, stability of Runge-Kutta schemes, and the convergence acceleration techniques of multigrid and implicit residual smoothing. Both the Baldwin and Lomax algebraic equilibrium model and the Johnson and King one-half equation nonequilibrium model are used to establish turbulence closure. Implementation of these models is described.
Generalized formulation of a class of explicit and implicit TVD schemes
NASA Technical Reports Server (NTRS)
Yee, H. C.
1985-01-01
A one-parameter family of second order explicit and implicit total variation diminishing (TVD) schemes is reformulated so that a simpler and wider group of limiters is included. The resulting scheme can be viewed as a symmetrical algorithm with a variety of numerical dissipation terms that are designed for weak solutions of hyperbolic problems. This is a generalization of Roe and Davis's recent works to a wider class of symmetric schemes other than Lax-Wendroff. The main properties of the present class of schemes are that they can be implicit, and when steady state calculations are sought, the numerical solution is independent of the time step.
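A minimal instance of a limiter-based TVD scheme (illustrative; the minmod limiter and the scalar linear-advection setting are assumptions, narrower than the paper's general symmetric formulation) for advection at positive speed and CFL number c:

```python
def minmod(a, b):
    """Minmod limiter: zero at extrema, the smaller slope otherwise."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def tvd_advect(u, c):
    """One step of a MUSCL-type upwind scheme on a periodic grid.

    Limited slopes give second-order accuracy in smooth regions while
    keeping the update total-variation diminishing: no new extrema are
    created, unlike an unlimited second-order (Lax-Wendroff-like) scheme.
    """
    n = len(u)
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # interface states: upwind cell value plus limited half-slope correction
    flux = [u[i] + 0.5 * (1.0 - c) * s[i] for i in range(n)]
    return [u[i] - c * (flux[i] - flux[i - 1]) for i in range(n)]
```

Running a square wave through several steps shows the TVD property directly: the total variation never grows and the solution stays within its initial bounds.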
NASA Astrophysics Data System (ADS)
Yang, Xiaofeng; Han, Daozhi
2017-02-01
In this paper, we develop a series of linear, unconditionally energy stable numerical schemes for solving the classical phase field crystal model. The temporal discretizations are based on the first order Euler method, the second order backward differentiation formulas (BDF2) and the second order Crank-Nicolson method, respectively. The schemes lead to linear elliptic equations to be solved at each time step, and the induced linear systems are symmetric positive definite. We prove that all three schemes are unconditionally energy stable rigorously. Various classical numerical experiments in 2D and 3D are performed to validate the accuracy and efficiency of the proposed schemes.
Nonlinear Secret Image Sharing Scheme
Shin, Sang-Ho; Yoo, Kee-Young
2014-01-01
Over the past decade, most secret image sharing schemes have been based on Shamir's technique, which relies on linear-combination polynomial arithmetic. Although Shamir-based secret image sharing schemes are efficient and scalable for various environments, they are exposed to a security threat known as the Tompa-Woll attack. Renvall and Ding proposed a secret sharing technique based on nonlinear-combination polynomial arithmetic to counter this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. To achieve a suitable and secure scheme, we adapt a modified LSB embedding technique with an XOR Boolean operation, define a new variable m, and change the range of the prime p in the sharing procedure. We evaluate the efficiency and security of the proposed scheme using the embedding capacity and PSNR. The average PSNR and embedding capacity are 44.78 dB and 1.74t⌈log2 m⌉ bits per pixel (bpp), respectively. PMID:25140334
Parallel level-set methods on adaptive tree-based grids
NASA Astrophysics Data System (ADS)
Mirzadeh, Mohammad; Guittet, Arthur; Burstedde, Carsten; Gibou, Frederic
2016-10-01
We present scalable algorithms for the level-set method on dynamic, adaptive Quadtree and Octree Cartesian grids. The algorithms are fully parallelized and implemented using the MPI standard and the open-source p4est library. We solve the level set equation with a semi-Lagrangian method which, similar to its serial implementation, is free of any time-step restrictions. This is achieved by introducing a scalable global interpolation scheme on adaptive tree-based grids. Moreover, we present a simple parallel reinitialization scheme using the pseudo-time transient formulation. Both parallel algorithms scale on the Stampede supercomputer, where we are currently using up to 4096 CPU cores, the limit of our current account. Finally, a relevant application of the algorithms is presented in modeling a crystallization phenomenon by solving a Stefan problem, illustrating a level of detail that would be impossible to achieve without a parallel adaptive strategy. We believe that the algorithms presented in this article will be of interest and useful to researchers working with the level-set framework and modeling multi-scale physics in general.
NASA Astrophysics Data System (ADS)
Wan, Hui; Zhang, Kai; Rasch, Philip J.; Singh, Balwinder; Chen, Xingyuan; Edwards, Jim
2017-02-01
A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided using the Community Atmosphere Model (CAM) version 5.3 that the proposed procedure can be used to distinguish rounding-level solution changes from impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive since it does not detect issues associated with diagnostic calculations that do not feed back to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.
Stability of explicit advection schemes. The balance point location rule
NASA Astrophysics Data System (ADS)
Leonard, B. P.
2002-02-01
This paper introduces the balance point location rule, providing specific necessary and sufficient conditions for constructing unconditionally stable explicit advection schemes, in both semi-Lagrangian and flux-form Eulerian formulations. The rule determines how the spatial stencil is placed on the computational grid. It requires the balance point (the center of the stencil in index space) to be located in the same patch as the departure point for semi-Lagrangian schemes, or the same cell as the sweep point for Eulerian schemes. Centering the stencil in this way guarantees stability, regardless of the size of the time step. In contrast, the original Courant-Friedrichs-Lewy (CFL) condition, requiring the stencil merely to include the departure (sweep) point, although necessary, is not sufficient to guarantee stability. The CFL condition is of limited practical value, whereas the balance point location rule always gives precise and easily implemented prescriptions for constructing stable algorithms. The rule is also helpful in correcting a number of misconceptions that have arisen concerning explicit advection schemes. In particular, explicit Eulerian schemes are widely believed to be inefficient because of stability constraints on the time step, dictated by a narrow interpretation of the CFL condition requiring the Courant number to be less than or equal to one. However, such constraints apply only to the particular class of advection schemes resulting from centering the stencil on the arrival point; in fact, the sole function of the stencil is to estimate the departure (sweep) point value, and the arrival point has no relevance in determining the placement of the stencil. Unconditionally stable explicit Eulerian advection schemes are efficient and accurate, comparable in operation count to semi-Lagrangian schemes of the same order, but because of their flux-based formulation they have the added advantage of being inherently conservative.
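The balance point rule is easy to illustrate for 1-D linear advection: center the interpolation stencil on the departure point and the explicit step is stable for any Courant number. The grid, linear interpolation, and periodic boundary below are illustrative assumptions:

```python
import numpy as np

def semi_lagrangian_step(u, c):
    """One semi-Lagrangian step for u_t + a u_x = 0 on a periodic grid.
    c = a*dt/dx is the Courant number and may exceed 1: the stencil is
    placed around the departure point, not the arrival point, so the
    step is stable regardless of the time-step size."""
    n = len(u)
    x_dep = np.arange(n) - c            # departure points, in index space
    i0 = np.floor(x_dep).astype(int)    # grid point just below x_dep
    frac = x_dep - i0                   # fractional position in [0, 1)
    # linear interpolation between the two values bracketing x_dep
    return (1 - frac) * u[i0 % n] + frac * u[(i0 + 1) % n]
```

For integer Courant numbers the step is an exact shift; for non-integer ones the interpolation is a convex combination of grid values, so the solution stays bounded however large c is.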
NASA Astrophysics Data System (ADS)
Tsuji, Takuya; Yokomine, Takehiko; Shimizu, Akihiko
2002-11-01
We have been engaged in the development of a multi-scale adaptive simulation technique for incompressible turbulent flow, designed so that important scale components in the flow field are detected automatically by the lifting wavelet transform and solved selectively. In conventional incompressible schemes, it is common to solve a Poisson equation for the pressure to satisfy the divergence-free constraint of incompressible flow. Solving the Poisson equation adaptively is possible in principle, but troublesome, because it requires regenerating control volumes at every time step. We therefore turned to the weakly compressible model proposed by Bao (2001). This model was derived from a zero-Mach-limit asymptotic analysis of the compressible Navier-Stokes equations and does not require solving the Poisson equation at all. However, it is relatively new and requires a demonstration study before being combined with wavelet-based adaptation. In the present study, 2-D and 3-D backstep flows are selected as test problems, and the applicability to turbulent flow is verified in detail. In addition, the combination of wavelet-based adaptation with the weakly compressible model toward adaptive turbulence simulation is discussed.
Patel, N.R.; Sturek, W.B.; Hiromoto, R.
1989-01-01
Parallel Navier-Stokes codes are developed to solve both two-dimensional and three-dimensional flow fields in and around ramjet and nose tip configurations. A multi-zone overlapped grid technique is used to extend an explicit finite-difference method to more complicated geometries. Parallel implementations are developed for execution on both distributed- and common-memory multiprocessor architectures. For steady-state solutions, the use of the local time-step method has the inherent advantage of reducing the communications overhead commonly incurred by parallel implementations. Computational results of the codes are given for a series of test problems. The parallel partitioning of computational zones is also discussed. 5 refs., 18 figs.
The basic function scheme of polynomial type
Wu, Wang-yi; Lin, Guang
2009-12-01
A new numerical method, the basic function method, is proposed. The method can directly discretize differential operators on unstructured grids. By expanding basic functions to approximate the exact solution, central and upwind schemes for the derivatives are constructed. Using second-order polynomials as basic functions and applying flux splitting together with a combination of the central and upwind schemes to suppress non-physical oscillations near shock waves, a second-order basic function scheme of polynomial type for the numerical solution of inviscid compressible flow is constructed in this paper. Numerical results for many typical examples of two-dimensional inviscid compressible transonic and supersonic steady flows illustrate that it is a new scheme with high accuracy and high resolution of shock waves. In particular, when combined with an adaptive remeshing technique, these schemes yield satisfactory results.
Finite-volume scheme for anisotropic diffusion
Es, Bram van; Koren, Barry; Blank, Hugo J. de
2016-02-01
In this paper, we apply a special finite-volume scheme, limited to smooth temperature distributions and Cartesian grids, to test the importance of connectivity of the finite volumes. The area of application is nuclear fusion plasma with field line aligned temperature gradients and extreme anisotropy. We apply the scheme to the anisotropic heat-conduction equation, and compare its results with those of existing finite-volume schemes for anisotropic diffusion. Also, we introduce a general model adaptation of the steady diffusion equation for extremely anisotropic diffusion problems with closed field lines.
Nonhydrostatic adaptive mesh dynamics for multiscale climate models (Invited)
NASA Astrophysics Data System (ADS)
Collins, W.; Johansen, H.; McCorquodale, P.; Colella, P.; Ullrich, P. A.
2013-12-01
Many of the atmospheric phenomena with the greatest potential impact in future warmer climates are inherently multiscale. Such meteorological systems include hurricanes and tropical cyclones, atmospheric rivers, and other types of hydrometeorological extremes. These phenomena are challenging to simulate in conventional climate models because the uniform model resolutions are coarse relative to the native nonhydrostatic scales of the phenomenological dynamics. To enable studies of these systems with sufficient local resolution for the multiscale dynamics, yet with sufficient speed for climate-change studies, we have adapted existing adaptive mesh dynamics for the DOE-NSF Community Atmosphere Model (CAM). In this talk, we present an adaptive, conservative finite volume approach for moist nonhydrostatic atmospheric dynamics. The approach is based on the compressible Euler equations on 3D thin spherical shells, where the radial direction is treated implicitly (using a fourth-order Runge-Kutta IMEX scheme) to eliminate time step constraints from vertical acoustic waves. Refinement is performed only in the horizontal directions. The spatial discretization is the equiangular cubed-sphere mapping, with a fourth-order accurate discretization to compute flux averages on faces. By using both space- and time-adaptive mesh refinement, the solver allocates computational effort only where greater accuracy is needed. The resulting method is demonstrated to be fourth-order accurate for model problems, robust at solution discontinuities, and stable for large aspect ratios. We present dynamical-core comparisons of moist physics using a simplified physics package, including a Hadley cell lifting an advected tracer into the upper atmosphere, with horizontal adaptivity.
Wicaksono, D.; Zerkak, O.; Nikitin, K.; Ferroukhi, H.; Chawla, R.
2013-07-01
This paper reports refinement studies on the temporal coupling scheme and time-stepping management of TRACE/S3K, a dynamically coupled code version of the thermal-hydraulics system code TRACE and the 3D core simulator Simulate-3K. The studies were carried out for two test cases, namely a PWR rod ejection accident and the Peach Bottom 2 Turbine Trip Test 2. The solution of the coupled calculation, especially the power peak, proves to be very sensitive to the time-step size with the currently employed conventional operator-splitting. Furthermore, a very small time-step size is necessary to achieve decent accuracy. This degrades the trade-off between accuracy and performance. A simple and computationally cheap implementation of time-projection of power has been shown to be able to improve the convergence of the coupled calculation. This scheme is able to achieve a prescribed accuracy with a larger time-step size. (authors)
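The time-projection idea can be sketched in a few lines; the function name and the choice of extrapolating to the midpoint of the next coupling step are assumptions for illustration, since the abstract does not give the exact formula used in TRACE/S3K:

```python
def project_power(p_prev, p_curr, dt_prev, dt_next):
    """Linearly extrapolate the power history (p_prev at the previous
    coupling point, p_curr at the current one) to the middle of the
    next coupling step.  This gives the thermal-hydraulics side a
    better power estimate than simply holding p_curr constant, which
    is what plain operator splitting does."""
    slope = (p_curr - p_prev) / dt_prev
    return p_curr + slope * 0.5 * dt_next
```

Because the correction is a two-point extrapolation, it is exact for linearly varying power and costs essentially nothing per step, which is why it can recover accuracy at larger time-step sizes.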
On the Security of Provably Secure Multi-Receiver ID-Based Signcryption Scheme
NASA Astrophysics Data System (ADS)
Tan, Chik-How
Recently, Duan and Cao proposed a multi-receiver identity-based signcryption scheme. They showed that their scheme is secure against adaptive chosen ciphertext attacks in the random oracle model. In this paper, we show that their scheme is in fact not secure against adaptive chosen ciphertext attacks under their defined security model.
The Dynamics of Some Iterative Implicit Schemes
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.
1994-01-01
The global asymptotic nonlinear behavior of some standard iterative procedures in solving nonlinear systems of algebraic equations arising from four implicit linear multistep methods (LMMs) in discretizing 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations is analyzed using the theory of dynamical systems. With the aid of parallel Connection Machines (CM-2 and CM-5), the associated bifurcation diagrams as a function of the time step, and the complex behavior of the associated 'numerical basins of attraction' of these iterative implicit schemes are revealed and compared. Studies showed that all of the four implicit LMMs exhibit a drastic distortion and segmentation but less shrinkage of the basin of attraction of the true solution than standard explicit methods. The numerical basins of attraction of a noniterative implicit procedure mimic more closely the basins of attraction of the differential equations than the iterative implicit procedures for the four implicit LMMs.
A gas-kinetic BGK scheme for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Xu, Kun
2000-01-01
This paper presents an improved gas-kinetic scheme based on the Bhatnagar-Gross-Krook (BGK) model for the compressible Navier-Stokes equations. The current method extends the previous gas-kinetic Navier-Stokes solver developed by Xu and Prendergast by implementing a general nonequilibrium state to represent the gas distribution function at the beginning of each time step. As a result, the requirement of the previous scheme that the particle collision time be less than the time step for the validity of the BGK Navier-Stokes solution is removed. Therefore, the applicable regime of the current method is much enlarged, and the Navier-Stokes solution can be obtained accurately regardless of the ratio between the collision time and the time step. The gas-kinetic Navier-Stokes solver developed by Chou and Baganoff is the limiting case of the current method, and it is valid only under such a limiting condition. Also, in this paper, the appropriate implementation of boundary conditions for the kinetic scheme, different kinetic limiting cases, and the Prandtl number fix are presented. The connection among artificial dissipative central schemes, Godunov-type schemes, and the gas-kinetic BGK method is discussed. Many numerical tests are included to validate the current method.
PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. I. ALGORITHM
Maron, Jason L.; McNally, Colin P.; Mac Low, Mordecai-Mark
2012-05-01
We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second-order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. This paper derives and documents the Phurbas algorithm as implemented in Phurbas version 1.1. A following paper presents the implementation and test problem results.
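A 1-D, unweighted sketch of the least-squares polynomial fit underlying such interpolations; actual Moving Least Squares uses a distance-based weight function and 3-D stencils, so this is illustrative only:

```python
import numpy as np

def mls_fit_1d(x_nbr, f_nbr, x0, degree=3):
    """Least-squares polynomial fit to neighboring particle values,
    returning the field value and first spatial derivative at x0.
    Expanding about x0 means the fitted coefficients are directly the
    Taylor coefficients at that point."""
    # columns: 1, dx, dx^2, ..., dx^degree with dx = x - x0
    A = np.vander(x_nbr - x0, degree + 1, increasing=True)
    coef, *_ = np.linalg.lstsq(A, f_nbr, rcond=None)
    return coef[0], coef[1]   # value and slope at x0
```

With more neighbors than coefficients the fit is over-determined, which is what smooths noise from an unstructured particle distribution while still reproducing polynomials up to the chosen degree exactly.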
An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.
1993-01-01
We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis's artificial viscosity model and a discretization error estimate based on Richardson extrapolation. Experiments are conducted on three problems in order to quantify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
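A generic step-doubling Richardson error estimate that could drive such a refinement indicator; the estimator form and names are illustrative, not the paper's exact procedure:

```python
import numpy as np

def richardson_error(step, u, dt, order=2):
    """Richardson-type local error estimate: advance once with dt and
    twice with dt/2.  For a scheme of the given order, the difference
    divided by (2**order - 1) estimates the error of the fine result."""
    coarse = step(u, dt)
    fine = step(step(u, dt / 2), dt / 2)
    err = np.max(np.abs(fine - coarse)) / (2**order - 1)
    return fine, err

def needs_refinement(err, tol):
    """Refinement indicator: flag regions whose estimate exceeds tol."""
    return err > tol
```

In an adaptive code this estimate is evaluated cell by cell, and only the flagged cells are recursively refined in both space and time.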
Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Ash, Robert L.
1992-01-01
The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employs a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the k-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order central-difference operator. Two classes of explicit time integration have been investigated for solving the compressible inviscid/viscous flow problems: two-stage predictor-corrector schemes and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes have been modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C0 continuity of the grid across the block interface. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was validation of
NASA Astrophysics Data System (ADS)
Deng, Fang
This dissertation centers on the development of a modeling environment to predict the performance and operating characteristics of salient-pole synchronous generators. The model consists of two coupled sections: a time-stepping two-dimensional (2D) magnetostatic field finite element (FE) computation algorithm coupled to a state-space (SS) time-domain model of the winding circuits. Hence the term time-stepping Coupled Finite Element-State Space (CFE-SS) modeling environment is adopted for this approach. In the FE section, magnetic vector potential (MVP) based finite element formulations and computation of two-dimensional magnetostatic fields are used to obtain the magnetic field solutions throughout a machine's cross-section at a sequence (sampling) of rotor positions covering a complete (360 electrical degrees) ac cycle. These field solutions yield the winding inductances by means of an energy and current perturbation method. The output of the FE section is the magnetic field solutions and the entire set of phase, field, damper, and sleeve winding inductance profiles versus rotor position, including all space harmonics due to rotor saliency, damper bar slotting, sleeve segmentation, stator slotting, and magnetic saturation. These inductance profiles are decomposed into their harmonic components by Fourier analysis. The magnetic field solutions and resulting winding inductances represent the key input data to the SS portion of the CFE-SS modeling environment. Laminated machine iron core loss calculations, which include the losses in the stator and rotor as well as the pole face, are subsequently performed using the magnetic field solution data. Conversely, the output of the SS portion is the entire set of phase, field, damper winding (circuit), and sleeve segment currents, which also include all the resulting time harmonics. These winding current results form in turn the key input data to the FE portion of the modeling environment which is
Homman, Ahmed-Amine; Maillet, Jean-Bernard; Roussel, Julien; Stoltz, Gabriel
2016-01-14
This work presents new parallelizable numerical schemes for the integration of dissipative particle dynamics with energy conservation. So far, no numerical scheme introduced in the literature is able to correctly preserve the energy over long times and give rise to small errors on average properties for moderately small time steps, while being straightforwardly parallelizable. We present in this article two new methods, both straightforwardly parallelizable, that correctly preserve the total energy of the system. We illustrate the accuracy and performance of these new schemes on both equilibrium and nonequilibrium parallel simulations.
NASA Astrophysics Data System (ADS)
Zhu, Lianhua; Wang, Peng; Guo, Zhaoli
2017-03-01
The general characteristics based off-lattice Boltzmann scheme proposed by Bardow et al. [1] (hereafter Bardow's scheme) and the discrete unified gas kinetic scheme (DUGKS) [2] are two methods that successfully overcome the time step restriction by the collision time, which is commonly seen in many other kinetic schemes. In this work, we first perform a theoretical analysis of the two schemes in the finite volume framework by comparing their numerical flux evaluations. It is found that the effect of the collision term is considered in the evaluation of the cell-interface distribution function in both schemes, which explains why they can overcome the time step restriction and give accurate results even when the time step is much larger than the collision time. The difference between the two schemes lies in the treatment of the integral of the collision term when evaluating the cell-interface distribution function: Bardow's scheme uses the rectangular rule while DUGKS uses the trapezoidal rule. The performance of the two schemes, i.e., accuracy, stability, and efficiency, is then compared by simulating several two-dimensional flows, including the unsteady Taylor-Green vortex flow, the steady lid-driven cavity flow, and the laminar boundary layer problem. It is observed that DUGKS gives more accurate results than Bardow's scheme with the same mesh size. Furthermore, the numerical stability of Bardow's scheme decreases as the Courant-Friedrichs-Lewy (CFL) number approaches 1, while the stability of DUGKS is apparently not affected by the CFL number as long as CFL < 1. It is also observed that DUGKS is twice as expensive as Bardow's scheme with the same mesh size.
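The single point of difference between the two schemes, the quadrature rule applied to the collision term over a step, can be stated in a few lines (illustrative sketch; `omega` stands for the collision operator and the function name is an assumption):

```python
def collision_integral(omega, f0, f1, dt, rule):
    """Approximate the integral of the collision term over one time step.
    The rectangular rule (Bardow-type) uses only the value at the start
    of the step; the trapezoidal rule (DUGKS-type) averages the values
    at both ends.  omega(f) is the collision operator; f0 and f1 are the
    distribution-function values at the step endpoints."""
    if rule == "rectangular":
        return dt * omega(f0)
    if rule == "trapezoidal":
        return 0.5 * dt * (omega(f0) + omega(f1))
    raise ValueError(f"unknown rule: {rule}")
```

The trapezoidal choice is second-order in the step size, which is consistent with the observation in the abstract that DUGKS is more accurate at the same mesh size.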
Yu, Hong-Zhou; Sen, Jin; Di, Xue-Ying
2013-06-01
Using the equilibrium moisture content-timelag methods of Nelson and of Simard and a meteorological element regression method, this paper studied the dynamics of the moisture content of ground-surface fine dead fuels under a Larix gmelinii stand on a sunny slope in Daxing'anling with a time interval of one hour, established the corresponding prediction models, and analyzed the prediction errors under different understory densities. The results showed that the one-hour-time-step prediction methods for fuel moisture content were applicable to the typical Larix gmelinii stand in Daxing'anling. The mean absolute error and the mean relative error of the Simard method were 1.1% and 8.5%, respectively, lower than those of the Nelson method and the meteorological element regression method, and close to those of similar studies. On the same slopes and slope positions, the fuel moisture content varied with understory density; thus, it would be necessary to select an appropriate equilibrium moisture content model for the specific regional stand and position, or to establish a fuel moisture content model for the specific stand, when the dynamics of fuel moisture content are investigated with a time interval of one hour.
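The generic timelag relaxation form underlying equilibrium moisture content-timelag methods can be sketched as follows; the fitted coefficients and equilibrium-moisture submodels of the paper are not given in the abstract, so the argument values here are assumptions:

```python
import math

def timelag_moisture(m0, m_eq, dt_hours, tau_hours):
    """Hourly fuel-moisture update: the moisture content m relaxes
    exponentially from m0 toward the equilibrium moisture content m_eq
    with timelag constant tau_hours (the time to remove ~63% of the
    difference).  All quantities in percent moisture / hours."""
    return m_eq + (m0 - m_eq) * math.exp(-dt_hours / tau_hours)
```

Driving this update hourly with an equilibrium moisture content computed from temperature and humidity is the basic structure of Nelson- and Simard-type predictions.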
Jensen, Benjamin D; Wise, Kristopher E; Odegard, Gregory M
2015-08-05
As the sophistication of reactive force fields for molecular modeling continues to increase, their use and applicability has also expanded, sometimes beyond the scope of their original development. Reax Force Field (ReaxFF), for example, was originally developed to model chemical reactions, but is a promising candidate for modeling fracture because of its ability to treat covalent bond cleavage. Performing reliable simulations of a complex process like fracture, however, requires an understanding of the effects that various modeling parameters have on the behavior of the system. This work assesses the effects of time step size, thermostat algorithm and coupling coefficient, and strain rate on the fracture behavior of three carbon-based materials: graphene, diamond, and a carbon nanotube. It is determined that the simulated stress-strain behavior is relatively independent of the thermostat algorithm, so long as coupling coefficients are kept above a certain threshold. Likewise, the stress-strain response of the materials was also independent of the strain rate, if it is kept below a maximum strain rate. Finally, the mechanical properties of the materials predicted by the Chenoweth C/H/O parameterization for ReaxFF are compared with literature values. Some deficiencies in the Chenoweth C/H/O parameterization for predicting mechanical properties of carbon materials are observed.
Adaptive spacetime method using Riemann jump conditions for coupled atomistic-continuum dynamics
Kraczek, B.; Miller, S.T.; Haber, R.B.; Johnson, D.D.
2010-03-20
We combine the Spacetime Discontinuous Galerkin (SDG) method for elastodynamics with the mathematically consistent Atomistic Discontinuous Galerkin (ADG) method in a new scheme that concurrently couples continuum and atomistic models of dynamic response in solids. The formulation couples non-overlapping continuum and atomistic models across sharp interfaces by weakly enforcing jump conditions, for both momentum balance and kinematic compatibility, using Riemann values to preserve the characteristic structure of the underlying hyperbolic system. Momentum is balanced to within machine precision over every element, on each atom, and over the coupled system, with small, controllable energy dissipation in the continuum region that ensures numerical stability. When implemented on suitable unstructured spacetime grids, the continuum SDG model offers linear computational complexity in the number of elements and powerful adaptive analysis capabilities that readily bridge between atomic and continuum scales in both space and time. A special trace operator for the atomic velocities and an associated atomistic traction field enter the jump conditions at the coupling interface. The trace operator depends on parameters that specify, at the scale of the atomic spacing, the position of the coupling interface relative to the atoms. In a key finding, we demonstrate that optimizing these parameters suppresses spurious reflections at the coupling interface without the use of non-physical damping or special boundary conditions. We formulate the implicit SDG-ADG coupling scheme in up to three spatial dimensions, and describe an efficient iterative solution scheme that outperforms common explicit schemes, such as the Velocity Verlet integrator. Numerical examples, in 1d×time and employing both linear and nonlinear potentials, demonstrate the performance of the SDG-ADG method and show how adaptive spacetime meshing reconciles disparate time steps and resolves atomic-scale signals in
Advection Scheme for Phase-changing Porous Media Flow of Fluids with Large Density Ratio
NASA Astrophysics Data System (ADS)
Zhang, Duan; Padrino, Juan
2015-11-01
Many flows in porous media involve phase changes between fluids with a large density ratio; in the water-steam phase change, for instance, the density ratio is about 1000. These phase changes can result from physical processes or from chemical reactions, such as fuel combustion in a porous medium. By mass conservation, the velocity ratio between the fluids is of the same order as the density ratio. As a result, the controlling Courant number for the time step in a numerical simulation is determined by the high-velocity, low-density phase, leading to small time steps. In this work we introduce a numerical approximation that increases the time step by taking advantage of the large density ratio. We provide an analytical error estimate for this approximate numerical scheme. Numerical examples show that this approximation achieves about a 40-fold speedup at the cost of a few percent error. Work partially supported by an LDRD project of LANL.
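The time-step constraint described above can be illustrated with a short calculation. This is a minimal sketch with illustrative values (not taken from the paper), assuming the mass flux rho*u is continuous across the phase-change front, so the gas-phase velocity, and hence the Courant-limited time step, is set by the density ratio:

```python
def cfl_time_step(dx, velocity, cfl=0.5):
    """Largest stable time step for grid spacing dx at the given speed."""
    return cfl * dx / abs(velocity)

# Water/steam-like densities; the liquid speed and grid spacing are illustrative.
rho_liq, rho_gas = 1000.0, 1.0
u_liq = 0.01                          # liquid-phase speed, m/s
u_gas = (rho_liq / rho_gas) * u_liq   # mass conservation: rho*u continuous

dx = 0.1                              # m
dt_liq = cfl_time_step(dx, u_liq)
dt_gas = cfl_time_step(dx, u_gas)     # ~1000x smaller: this step controls
```

The gas phase forces a time step roughly a thousand times smaller than the liquid phase alone would require, which is the overhead the paper's approximation targets.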
Application of modified Patankar schemes to stiff biogeochemical models for the water column
NASA Astrophysics Data System (ADS)
Burchard, Hans; Deleersnijder, Eric; Meister, Andreas
2005-12-01
In this paper, we apply recently developed positivity-preserving and conservative Modified Patankar-type solvers for ordinary differential equations to a simple stiff biogeochemical model for the water column. The performance of this scheme is compared to schemes which are not unconditionally positivity preserving (the first-order Euler and the second- and fourth-order Runge-Kutta schemes) and to schemes which are not conservative (the first- and second-order Patankar schemes). The biogeochemical model chosen as a test ground is a standard nutrient-phytoplankton-zooplankton-detritus (NPZD) model, which has been made stiff by substantially decreasing the half-saturation concentration for nutrients. For evaluating the stiffness of the biogeochemical model, so-called numerical time scales are defined, obtained empirically by applying high-resolution numerical schemes. For all ODE solvers under investigation, the temporal error is analysed for a simple exponential decay law. The performance of all schemes is compared to a high-resolution, high-order reference solution. As a result, the second-order Modified Patankar-Runge-Kutta scheme agrees well with the reference solution even for time steps 10 times longer than the shortest numerical time scale of the problem. The other schemes either compute negative values for non-negative state variables (the fully explicit schemes), violate conservation (the Patankar schemes), or show low accuracy (all first-order schemes).
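The Patankar idea can be sketched for the exponential decay law mentioned above. This is a generic first-order illustration under simplified assumptions, not the authors' second-order Modified Patankar-Runge-Kutta scheme: weighting the destruction term by the ratio of the new to the old concentration turns the explicit Euler update into an unconditionally positive one.

```python
# Pure decay law dc/dt = -lam * c.  Explicit Euler goes negative once
# dt > 1/lam; the Patankar-weighted update stays positive for any dt.

def euler_step(c, lam, dt):
    return c - dt * lam * c

def patankar_euler_step(c, lam, dt):
    # Weight the destruction term by c_new/c_old and solve for c_new:
    #   c_new = c - dt*lam*c*(c_new/c)   =>   c_new = c / (1 + dt*lam)
    return c / (1.0 + dt * lam)

lam, dt, c0 = 1.0, 2.0, 1.0            # deliberately too-large step: dt > 1/lam
c_euler = euler_step(c0, lam, dt)       # negative, unphysical
c_patankar = patankar_euler_step(c0, lam, dt)  # positive
```

For this linear problem the weighted update coincides with implicit Euler; the Modified Patankar schemes in the paper extend the same weighting to coupled production-destruction systems while preserving conservation.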
Central difference TVD and TVB schemes for time dependent and steady state problems
NASA Technical Reports Server (NTRS)
Jorgenson, P.; Turkel, E.
1992-01-01
We use central differences to solve the time-dependent Euler equations. The schemes are all advanced in time using a Runge-Kutta formula. Near shocks, a second difference is added as an artificial viscosity. This reduces the scheme to a first-order upwind scheme at shocks. The switch that is used guarantees that the scheme is locally total variation diminishing (TVD). For steady-state problems it is usually advantageous to relax this condition; then small oscillations do not activate the switches and convergence to a steady state is improved. To sharpen the shocks, different coefficients are needed for different equations, and so a matrix-valued dissipation is introduced and compared with the scalar viscosity. The connection between this artificial viscosity and flux limiters is shown. Any flux limiter can be used as the basis of a shock detector for an artificial viscosity. We compare the use of the van Leer, van Albada, minmod, superbee, and 'average' flux limiters for this central difference scheme. For time-dependent problems, the time step must be small enough that the CFL number is less than one, even though the scheme is linearly stable for larger time steps. Using a total variation bounded (TVB) Runge-Kutta scheme yields minor improvements in accuracy.
Ranking Schemes in Hybrid Boolean Systems: A New Approach.
ERIC Educational Resources Information Center
Savoy, Jacques
1997-01-01
Suggests a new ranking scheme especially adapted for hypertext environments in order to produce more effective retrieval results and still use Boolean search strategies. Topics include Boolean ranking schemes; single-term indexing and term weighting; fuzzy set theory extension; and citation indexing. (64 references) (Author/LRW)
NASA Astrophysics Data System (ADS)
Qiu, Zhongfeng; Doglioli, Andrea M.; He, Yijun; Carlotti, Francois
2011-03-01
This paper presents two tests for a Lagrangian model of zooplankton dispersion: one of numerical schemes and one of time steps. Firstly, we compared three numerical schemes using idealized circulations. Results show that the precision of the advanced Adams-Bashforth-Moulton (ABM) method and that of the Runge-Kutta (RK) method were of the same order, and both were much higher than that of the Euler method. Furthermore, the advanced ABM method is more efficient than the RK method in computational memory requirements and time consumption. We therefore chose the advanced ABM method as the Lagrangian particle-tracking algorithm. Secondly, we performed a sensitivity test for time steps, using outputs of the hydrodynamic model Symphonie. Results show that the choice of time step depends on the fluid response time, which is related to the spatial resolution of the velocity fields. The method introduced by Oliveira et al. in 2002 is suitable for choosing time steps of Lagrangian particle-tracking models, at least when only advection is considered.
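The accuracy gap between the Euler and Runge-Kutta steps is easy to reproduce in an idealized circulation. A minimal sketch (the flow field, step size, and particle count are illustrative; the multistep ABM predictor-corrector is omitted because it needs a startup history): particles advected in a solid-body rotation field should stay on circles, so the drift in radius measures the scheme's error.

```python
import math

def velocity(x, y):
    return -y, x  # solid-body rotation; exact trajectories are circles

def euler_step(x, y, dt):
    u, v = velocity(x, y)
    return x + dt * u, y + dt * v

def rk4_step(x, y, dt):
    k1 = velocity(x, y)
    k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
    k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
    k4 = velocity(x + dt * k3[0], y + dt * k3[1])
    return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

def radius_error(step, n, dt):
    x, y = 1.0, 0.0
    for _ in range(n):
        x, y = step(x, y, dt)
    return abs(math.hypot(x, y) - 1.0)  # drift from the unit circle

n, dt = 628, 0.01  # roughly one revolution
err_euler = radius_error(euler_step, n, dt)
err_rk4 = radius_error(rk4_step, n, dt)
```

The Euler particle spirals outward by a few percent per revolution, while the fourth-order step stays on the circle to high precision, consistent with the ordering reported above.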
Adaptive Gaussian Pattern Classification
Priebe, C. E.; Marchette, D. J.
1988-08-01
…redundant model of the data to be used in classification. There are two classes of learning, or adaptation, schemes. The first, unsupervised learning…
Identification Schemes from Key Encapsulation Mechanisms
NASA Astrophysics Data System (ADS)
Anada, Hiroaki; Arita, Seiko
We propose a generic conversion from a key encapsulation mechanism (KEM) to an identification (ID) scheme. The conversion derives the security of ID schemes against concurrent man-in-the-middle (cMiM) attacks from the security of KEMs against adaptive chosen ciphertext attacks on one-wayness (one-way-CCA2). Then, regarding the derivation as a design principle for ID schemes, we develop a series of concrete one-way-CCA2 secure KEMs. We start with the El Gamal KEM and prove it secure against non-adaptive chosen ciphertext attacks on one-wayness (one-way-CCA1) in the standard model. Then, we apply a tag framework with the algebraic trick of Boneh and Boyen to make it one-way-CCA2 secure based on the Gap-CDH assumption. Next, we apply the CHK transformation or a target collision resistant hash function to exit the tag framework. Finally, since it is preferable to rely on the CDH assumption rather than the Gap-CDH assumption, we apply the Twin DH technique of Cash, Kiltz and Shoup. The application is not “black box”: we carry it out by making the Twin DH technique compatible with the algebraic trick. The ID schemes obtained from our KEMs show the highest performance, in both computational cost and message length, compared with previously known ID schemes secure against concurrent man-in-the-middle attacks.
High-Order Residual-Distribution Hyperbolic Advection-Diffusion Schemes: 3rd-, 4th-, and 6th-Order
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza R.; Nishikawa, Hiroaki
2014-01-01
In this paper, spatially high-order Residual-Distribution (RD) schemes using the first-order hyperbolic system method are proposed for general time-dependent advection-diffusion problems. The corresponding second-order time-dependent hyperbolic advection-diffusion scheme was first introduced in [NASA/TM-2014-218175, 2014], where rapid convergence over each physical time step, typically with fewer than five Newton iterations, was shown. In that method, the time-dependent hyperbolic advection-diffusion system (linear and nonlinear) was discretized by the second-order upwind RD scheme in a unified manner, and the system of implicit residual equations was solved efficiently by Newton's method over every physical time step. In this paper, two techniques for the source term discretization are proposed: 1) reformulation of the source terms in their divergence forms, and 2) a correction to the trapezoidal rule for the source term discretization. Third-, fourth-, and sixth-order RD schemes are then proposed with the above techniques that, relative to the second-order RD scheme, only cost the evaluation of either the first derivative or both the first and second derivatives of the source terms. A special fourth-order RD scheme is also proposed that is even less computationally expensive than the third-order RD schemes. The second-order Jacobian formulation is used for all the proposed high-order schemes. Numerical results are then presented for both steady and time-dependent, linear and nonlinear advection-diffusion problems. It is shown that these newly developed high-order RD schemes are remarkably efficient and capable of producing the solutions and the gradients to the design order of accuracy, with rapid convergence over each physical time step, typically in fewer than ten Newton iterations.
A three-dimensional boundary layer scheme: stability and accuracy analyses
NASA Astrophysics Data System (ADS)
Horri-Naceur, Jalil; Buisine, Daniel
2002-03-01
We present a numerical scheme for the calculation of incompressible three-dimensional boundary layers (3DBL), designed to take advantage of the 3DBL model's overall hyperbolic nature, which is linked to the existence of wedge-shaped dependence and influence zones. The proposed scheme, explicit along the wall and implicit in the normal direction, allows large time steps, thus enabling fast convergence. In order to keep this partly implicit character, the control volumes for the mass and momentum balances are not staggered along the wall. This results in a lack of numerical viscosity, making the scheme unstable. The implementation of a numerical diffusion, suited to the local zone of influence, restores the stability of the boundary layer scheme while preserving second-order space accuracy. The purpose of this article is to present the analytical and numerical studies carried out to establish the scheme's accuracy and stability properties.
Parallel Anisotropic Tetrahedral Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.; Darmofal, David L.
2008-01-01
An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.
Real Gas Computation Using an Energy Relaxation Method and High-Order WENO Schemes
NASA Technical Reports Server (NTRS)
Montarnal, Philippe; Shu, Chi-Wang
1998-01-01
In this paper, we use a recently developed energy relaxation theory by Coquel and Perthame and high order weighted essentially non-oscillatory (WENO) schemes to simulate the Euler equations of real gas. The main idea is an energy decomposition into two parts: one part is associated with a simpler pressure law and the other part (the nonlinear deviation) is convected with the flow. A relaxation process is performed for each time step to ensure that the original pressure law is satisfied. The necessary characteristic decomposition for the high order WENO schemes is performed on the characteristic fields based on the first part. The algorithm only calls for the original pressure law once per grid point per time step, without the need to compute its derivatives or any Riemann solvers. Both one and two dimensional numerical examples are shown to illustrate the effectiveness of this approach.
Analysis of triangular C-grid finite volume scheme for shallow water flows
NASA Astrophysics Data System (ADS)
Shirkhani, Hamidreza; Mohammadian, Abdolmajid; Seidou, Ousmane; Qiblawey, Hazim
2015-08-01
In this paper, a dispersion relation analysis is employed to investigate the finite volume triangular C-grid formulation for the two-dimensional shallow-water equations. In addition, two proposed combinations of time-stepping methods with the C-grid spatial discretization are investigated. In the first part of this study, the C-grid spatial discretization scheme is assessed, and in the second part, fully discrete schemes are analyzed. Analysis of the semi-discretized scheme (i.e. spatial discretization only) shows that there is no damping associated with the spatial C-grid scheme, and its phase speed behavior is also acceptable for long and intermediate waves. The analytical dispersion analysis, after including the effect of time discretization, shows that the Leap-Frog time-stepping technique can improve the phase speed behavior of the numerical method; however, it cannot damp the shorter decelerated waves. The Adams-Bashforth technique leads to slower propagation of short and intermediate waves, and it damps those waves with a slower propagating speed. The numerical solutions of various test problems conform to and are in good agreement with the analytical dispersion analysis. They also indicate that the Adams-Bashforth scheme exhibits faster convergence and more accurate results as the spatial and temporal step sizes decrease, whereas the Leap-Frog scheme remains stable at higher CFL numbers.
Novel discretization schemes for the numerical simulation of membrane dynamics
NASA Astrophysics Data System (ADS)
Kolsti, Kyle F.
Motivated by the demands of simulating flapping wings of Micro Air Vehicles, novel numerical methods were developed and evaluated for the dynamic simulation of membranes. For linear membranes, a mixed-form time-continuous Galerkin method was employed using trilinear space-time elements. Rather than time-marching, the entire space-time domain was discretized and solved simultaneously. Second-order rates of convergence in both space and time were observed in numerical studies. Slight high-frequency noise was filtered during post-processing. For geometrically nonlinear membranes, the model incorporated two new schemes that were independently developed and evaluated. Time marching was performed using quintic Hermite polynomials uniquely determined by end-point jerk constraints. The single-step, implicit scheme was significantly more accurate than the most common Newmark schemes. For a simple harmonic oscillator, the scheme was found to be symplectic, frequency-preserving, and conditionally stable. Time step size was limited by accuracy requirements rather than stability. The spatial discretization scheme employed a staggered grid, grouping of nonlinear terms, and polygon shape functions in a strong-form point collocation formulation. The observed rate of convergence was two for both displacement and strain. Validation against existing experimental data showed the method to be accurate until hyperelastic effects dominate.
Willcock, J J; Lumsdaine, A; Quinlan, D J
2008-08-19
Tabled execution is a generalization of memoization developed by the logic programming community. It not only saves results from tabled predicates, but also stores the set of currently active calls to them; tabled execution can thus provide meaningful semantics for programs that seemingly contain infinite recursions with the same arguments. In logic programming, tabled execution is used for many purposes, both to improve the efficiency of programs and to make tasks simpler and more direct to express than with normal logic programs. However, tabled execution is only infrequently applied in mainstream functional languages such as Scheme. We demonstrate an elegant implementation of tabled execution in Scheme, using a mix of continuation-passing style and mutable data. We also show the use of tabled execution in Scheme for a problem in formal language and automata theory, demonstrating that tabled execution can be a valuable tool for Scheme users.
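The distinction between plain memoization and tabling can be sketched in a few lines. This is a hedged illustration of the bookkeeping only, in Python rather than Scheme, and not the continuation-passing implementation described above: alongside the table of finished results, a set of *active* calls lets the evaluator detect a call already in progress and return a provisional answer instead of recursing forever. (A full tabling engine then iterates to a fixpoint so provisional answers do not leave cached entries incomplete; that refinement is omitted here.)

```python
def tabled(f):
    """Memoize f, and break cycles by answering in-progress calls
    with a provisional empty result instead of recursing forever."""
    done, active = {}, set()
    def wrapper(arg):
        if arg in done:
            return done[arg]
        if arg in active:          # call already in progress:
            return frozenset()     # provisional empty answer
        active.add(arg)
        try:
            done[arg] = f(arg)
        finally:
            active.discard(arg)
        return done[arg]
    return wrapper

GRAPH = {"a": ["b"], "b": ["c", "a"], "c": []}  # cycle: a -> b -> a

@tabled
def reachable(node):
    out = {node}
    for nxt in GRAPH[node]:
        out |= set(reachable(nxt))
    return frozenset(out)

print(sorted(reachable("a")))  # -> ['a', 'b', 'c']
```

Without the `active` set, `reachable("a")` would recurse on itself indefinitely; with it, the self-call terminates, which is the semantics tabling gives to "infinite" recursions with repeated arguments.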
Dynamic remedial action scheme using online transient stability analysis
NASA Astrophysics Data System (ADS)
Shrestha, Arun
Economic pressure and environmental factors have forced modern power systems to operate closer to their stability limits. However, maintaining transient stability is a fundamental requirement for the operation of interconnected power systems. In North America, power systems are planned and operated to withstand the loss of any single or multiple elements without violating North American Electric Reliability Corporation (NERC) system performance criteria. For a contingency resulting in the loss of multiple elements (Category C), emergency transient stability controls may be necessary to stabilize the power system. Emergency control is designed to sense abnormal conditions and subsequently take pre-determined remedial actions to prevent instability. Commonly known as either Remedial Action Schemes (RAS) or as Special/System Protection Schemes (SPS), these emergency control approaches have been extensively adopted by utilities. RAS are designed to address specific problems, e.g. to increase power transfer, to provide reactive support, to address generator instability, to limit thermal overloads, etc. Possible remedial actions include generator tripping, load shedding, capacitor and reactor switching, static VAR control, etc. Among various RAS types, generation shedding is the most effective and widely used emergency control means for maintaining system stability. In this dissertation, an optimal power flow (OPF)-based generation-shedding RAS is proposed. This scheme uses online transient stability calculation and generator cost functions to determine appropriate remedial actions. For transient stability calculation, the SIngle Machine Equivalent (SIME) technique is used, which reduces the multimachine power system model to a One-Machine Infinite Bus (OMIB) equivalent and identifies critical machines. Unlike conventional RAS, which are designed using offline simulations, online stability calculations make the proposed RAS dynamic and adaptive to any power system
Bae, Soo Ya; Hong, Song-You; Lim, Kyo-Sun Sunny
2016-01-01
A method to explicitly calculate the effective radius of hydrometeors in the Weather Research Forecasting (WRF) double-moment 6-class (WDM6) microphysics scheme is designed to tackle the physical inconsistency in cloud properties between the microphysics and radiation processes. At each model time step, the calculated effective radii of hydrometeors from the WDM6 scheme are linked to the Rapid Radiative Transfer Model for GCMs (RRTMG) scheme to consider the cloud effects in radiative flux calculation. This coupling effect of cloud properties between the WDM6 and RRTMG algorithms is examined for a heavy rainfall event in Korea during 25–27 July 2011, and it is compared to the results from the control simulation in which the effective radius is prescribed as a constant value. It is found that the derived radii of hydrometeors in the WDM6 scheme are generally larger than the prescribed values in the RRTMG scheme. Consequently, shortwave fluxes reaching the ground (SWDOWN) are increased over less cloudy regions, showing a better agreement with a satellite image. The overall distribution of the 24-hour accumulated rainfall is not affected but its amount is changed. A spurious rainfall peak over the Yellow Sea is alleviated, whereas the local maximum in the central part of the peninsula is increased.
NASA Astrophysics Data System (ADS)
Jauberteau, F.; Temam, R. M.; Tribbia, J.
2014-08-01
In this paper, we study several multiscale/fractional-step schemes for the numerical solution of the rotating shallow water equations with complex topography. We consider the case of periodic boundary conditions (f-plane model). Spatial discretization is obtained using a Fourier spectral Galerkin method. For the schemes presented in this paper we consider two approaches. The first approach (multiscale schemes) is based on topography scale separation, and the numerical time integration is a function of the scales. The second approach is based on a splitting of the operators, and the time integration method is a function of the operator considered (fractional-step schemes). The numerical results obtained are compared with the explicit reference scheme (Leap-Frog scheme). With these multiscale/fractional-step schemes the objective is to propose new schemes giving numerical results similar to those obtained using only one uniform fine grid N×N and a time step Δt, but with a CPU time near the CPU time needed when using only one coarse grid N1×N1, N1
NASA Technical Reports Server (NTRS)
Banks, D. W.; Hafez, M. M.
1996-01-01
Grid adaptation for structured meshes is the art of using information from an existing, but poorly resolved, solution to automatically redistribute the grid points so as to improve the resolution in regions of high error, and thus the quality of the solution. This involves: (1) generate a grid via some standard algorithm, (2) calculate a solution on this grid, (3) adapt the grid to this solution, (4) recalculate the solution on this adapted grid, and (5) repeat steps 3 and 4 to satisfaction. Steps 3 and 4 can be repeated until some 'optimal' grid is converged to, but typically this is not worth the effort and just two or three repeat calculations are necessary. They may also be repeated every 5-10 time steps for unsteady calculations.
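The five steps above read naturally as a loop. A minimal sketch, where `make_grid`, `solve`, `estimate_error`, and `redistribute` are hypothetical stand-ins for the grid generator, flow solver, error estimator, and redistribution algorithm:

```python
def adaptive_solve(make_grid, solve, estimate_error, redistribute, cycles=3):
    """Steps 1-5 of the adapt-solve cycle.  `cycles` bounds the number
    of repeat calculations (two or three usually suffice)."""
    grid = make_grid()           # step 1: initial grid
    solution = solve(grid)       # step 2: first solution
    for _ in range(cycles):      # steps 3-5: adapt / re-solve loop
        error = estimate_error(grid, solution)
        grid = redistribute(grid, error)  # cluster points where error is high
        solution = solve(grid)
    return grid, solution
```

For unsteady calculations the same loop body would instead be invoked every 5-10 time steps, as noted above.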
Placidi, M.; Jung, J. -Y.; Ratti, A.; Sun, C.
2014-07-25
This paper describes beam distribution schemes adopting a novel implementation based on low amplitude vertical deflections combined with horizontal ones generated by Lambertson-type septum magnets. This scheme offers substantial compactness in the longitudinal layouts of the beam lines and increased flexibility for beam delivery of multiple beam lines on a shot-to-shot basis. Fast kickers (FK) or transverse electric field RF Deflectors (RFD) provide the low amplitude deflections. Initially proposed at the Stanford Linear Accelerator Center (SLAC) as tools for beam diagnostics and more recently adopted for multiline beam pattern schemes, RFDs offer repetition capabilities and a likely better amplitude reproducibility when compared to FKs, which, in turn, offer more modest financial involvements both in construction and operation. Both solutions represent an ideal approach for the design of compact beam distribution systems resulting in space and cost savings while preserving flexibility and beam quality.
NASA Astrophysics Data System (ADS)
Willkofer, Florian; Wood, Raul R.; Schmid, Josef; von Trentini, Fabian; Ludwig, Ralf
2016-04-01
The ClimEx project (Climate change and hydrological extreme events - risks and perspectives for water management in Bavaria and Québec) focuses on the effects of climate change on hydro-meteorological extreme events and their implications for water management in Bavaria and Québec. It builds on the conjoint analysis of a large ensemble of the CRCM5, driven by 50 members of the CanESM2, and the latest information provided through the CORDEX-initiative, to better assess the influence of natural climate variability and climatic change on the dynamics of extreme events. A critical point in the entire project is the preparation of a meteorological reference dataset with the required temporal (1-6h) and spatial (500m) resolution to be able to better evaluate hydrological extreme events in mesoscale river basins. For Bavaria a first reference data set (daily, 1km) used for bias-correction of RCM data was created by combining raster based data (E-OBS [1], HYRAS [2], MARS [3]) and interpolated station data using the meteorological interpolation schemes of the hydrological model WaSiM [4]. Apart from the coarse temporal and spatial resolution, this mosaic of different data sources is considered rather inconsistent and hence, not applicable for modeling of hydrological extreme events. Thus, the objective is to create a dataset with hourly data of temperature, precipitation, radiation, relative humidity and wind speed, which is then used for bias-correction of the RCM data being used as driver for hydrological modeling in the river basins. Therefore, daily data is disaggregated to hourly time steps using the 'Method of fragments' approach [5], based on available training stations. The disaggregation chooses fragments of daily values from observed hourly datasets, based on similarities in magnitude and behavior of previous and subsequent events. The choice of a certain reference station (hourly data, provision of fragments) for disaggregating daily station data (application
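The fragment-selection step described above can be sketched compactly. This is a hedged simplification with illustrative data: a daily total is split into hourly values by borrowing the relative hourly pattern (the "fragments") of the observed training day whose daily total is most similar, whereas the actual method also matches the behavior of previous and subsequent events.

```python
def disaggregate_day(daily_value, training_days):
    """Split a daily total into 24 hourly values using the relative
    hourly pattern of the most similar observed day.

    training_days: list of 24-value hourly records from a reference
    station.  Similarity here is closeness of daily totals only
    (a simplification of the published criterion)."""
    best = min(training_days, key=lambda hours: abs(sum(hours) - daily_value))
    total = sum(best)
    fragments = [v / total for v in best]   # relative hourly pattern, sums to 1
    return [daily_value * f for f in fragments]

# Two illustrative observed days (daily totals 24.0 and 12.0):
training = [[1.0] * 24, [0.5] * 24]
hourly = disaggregate_day(11.0, training)   # borrows the second day's pattern
```

Because the fragments sum to one, the disaggregated hourly values conserve the daily total exactly, which is the property the bias-correction chain depends on.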
A semi-Lagrangian finite difference WENO scheme for scalar nonlinear conservation laws
NASA Astrophysics Data System (ADS)
Huang, Chieh-Sen; Arbogast, Todd; Hung, Chen-Hui
2016-10-01
For a nonlinear scalar conservation law in one-space dimension, we develop a locally conservative semi-Lagrangian finite difference scheme based on weighted essentially non-oscillatory reconstructions (SL-WENO). This scheme has the advantages of both WENO and semi-Lagrangian schemes. It is a locally mass conservative finite difference scheme, it is formally high-order accurate in space, it has small time truncation error, and it is essentially non-oscillatory. The scheme is nearly free of a CFL time step stability restriction for linear problems, and it has a relaxed CFL condition for nonlinear problems. The scheme can be considered as an extension of the SL-WENO scheme of Qiu and Shu (2011) [2] developed for linear problems. The new scheme is based on a standard sliding average formulation with the flux function defined using WENO reconstructions of (semi-Lagrangian) characteristic tracings of grid points. To handle nonlinear problems, we use an approximate, locally frozen trace velocity and a flux correction step. A special two-stage WENO reconstruction procedure is developed that is biased to the upstream direction. A Strang splitting algorithm is used for higher-dimensional problems. Numerical results are provided to illustrate the performance of the scheme and verify its formal accuracy. Included are applications to the Vlasov-Poisson and guiding-center models of plasma flow.
Simple scheme for encoding and decoding a qubit in unknown state for various topological codes
Łodyga, Justyna; Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał
2015-01-01
We present a scheme for encoding and decoding an unknown state for CSS codes, based on syndrome measurements. We illustrate our method by means of the Kitaev toric code, the defected-lattice code, the topological subsystem code and the 3D Haah code. The protocol is local whenever, in a given code, the crossings between the logical operators consist of next-neighbour pairs, which holds for the above codes. For the subsystem code we also present a scheme for the noisy case, where we allow for bit- and phase-flip errors on qubits as well as state preparation and syndrome measurement errors. A similar scheme can be built for the two other codes. We show that the fidelity of the protected qubit in the noisy scenario, in the large code size limit, is of , where p is the probability of error on a single qubit per time step. Regarding the Haah code, we provide a noiseless scheme, leaving the noisy case as an open problem. PMID:25754905
Implicit scheme for Maxwell equations solution in case of flat 3D domains
NASA Astrophysics Data System (ADS)
Boronina, Marina; Vshivkov, Vitaly
2016-02-01
We present a new finite-difference scheme for the solution of Maxwell's equations in three-dimensional domains with different scales in different directions. The stability condition of the standard leap-frog scheme requires the time step to decrease together with the minimal spatial step, which depends on the minimal domain size. We overcome this conditional stability by modifying the standard scheme, adding implicitness in the direction of the smallest size. The new scheme satisfies the Gauss law for the electric and magnetic fields in finite differences. The approximation order, the maintenance of the wave amplitude and propagation speed, and the independence of wave propagation from the angle with the coordinate axes are analyzed.
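The restriction that motivates the added implicitness can be seen from the textbook CFL bound of the explicit leap-frog (Yee) scheme; the formula below is the standard 3D bound, assumed here for illustration rather than quoted from the paper.

```python
import math

# Standard CFL bound of the explicit leap-frog (Yee) scheme in 3D:
#   dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2))
def yee_cfl_dt(dx, dy, dz, c=299792458.0):
    return 1.0 / (c * math.sqrt(dx**-2 + dy**-2 + dz**-2))

# In a flat domain the smallest dimension dominates the bound
# (c = 1 here to keep the numbers readable):
dt_cube = yee_cfl_dt(1.0, 1.0, 1.0, c=1.0)
dt_flat = yee_cfl_dt(1.0, 1.0, 1e-3, c=1.0)
```

With dz a thousand times smaller than dx and dy, the allowable time step collapses to roughly dz/c, which is exactly the situation the implicit-in-one-direction modification is designed to avoid.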
Implicit unified gas-kinetic scheme for steady state solutions in all flow regimes
NASA Astrophysics Data System (ADS)
Zhu, Yajun; Zhong, Chengwen; Xu, Kun
2016-06-01
This paper presents an implicit unified gas-kinetic scheme (UGKS) for non-equilibrium steady state flow computation. The UGKS is a direct modeling method for flow simulation in all regimes, with updates of both the macroscopic flow variables and the microscopic gas distribution function. By solving the macroscopic equations implicitly, a predicted equilibrium state can first be obtained through iterations. With the newly predicted equilibrium state, the evolution equation of the gas distribution function and the corresponding collision term can be discretized in a fully implicit way, also for fast convergence through iterations. The lower-upper symmetric Gauss-Seidel (LU-SGS) factorization method is implemented to solve both the macroscopic and microscopic equations, which improves the efficiency of the scheme. Since the UGKS is a direct modeling method and its physical solution depends on the mesh resolution and the local time step, a physical time step needs to be fixed before using an implicit iterative technique with a pseudo-time marching step. Therefore, the physical time step in the current implicit scheme is determined in the same way as in the explicit UGKS for capturing the physical solution in all flow regimes, but the convergence to a steady state is sped up through the adoption of a numerical time step with a large CFL number. Many numerical test cases in different flow regimes, from low-speed to hypersonic ones, such as the Couette flow, the cavity flow, and the flow past a cylinder, are computed to validate the current implicit method. The overall efficiency of the implicit UGKS can be improved by one or two orders of magnitude in comparison with the explicit one.
Beyond first-order finite element schemes in micromagnetics
NASA Astrophysics Data System (ADS)
Kritsikis, E.; Vaysset, A.; Buda-Prejbeanu, L. D.; Alouges, F.; Toussaint, J.-C.
2014-01-01
Magnetization dynamics in ferromagnetic materials is ruled by the Landau-Lifshitz-Gilbert equation (LLG). Reliable schemes must conserve the magnetization norm, which is a nonconvex constraint, and be energy-decreasing unless there is pumping. Some of the authors previously devised a convergent finite element scheme that, by choice of an appropriate test space - the tangent plane to the magnetization - reduces to a linear problem at each time step. The scheme was, however, first-order in time. We claim this is not an intrinsic limitation, and that the same approach can lead to efficient micromagnetic simulation. We show how the scheme order can be increased, and the nonlocal (magnetostatic) interactions tackled in logarithmic time, by the fast multipole method or the non-uniform fast Fourier transform. Our implementation is called feeLLGood. A test-case of the National Institute of Standards and Technology is presented, then another one relevant to spin-transfer effects (the spin-torque oscillator).
NASA Astrophysics Data System (ADS)
Cinnella, P.; Content, C.
2016-12-01
Restrictions on the maximum allowable time step of explicit time integration methods for direct and large eddy simulations of compressible turbulent flows at high Reynolds numbers can be very severe, because of the extremely small space steps used close to solid walls to capture tiny and elongated boundary-layer structures. A way of increasing stability limits is to use implicit time integration schemes. However, the price to pay is a higher computational cost per time step, higher discretization errors and lower parallel scalability. In the quest for an implicit time scheme for scale-resolving simulations providing the best possible compromise between these opposite requirements, we develop a Runge-Kutta implicit residual smoothing (IRS) scheme of fourth-order accuracy, based on a bilaplacian operator. The implicit operator involves the inversion of scalar pentadiagonal systems, for which efficient parallel algorithms are available. The proposed method is assessed against two explicit and two implicit time integration techniques in terms of the computational cost required to achieve a threshold level of accuracy. Precisely, the proposed time scheme is compared to four-stage and six-stage low-storage Runge-Kutta methods, to the second-order IRS, and to a second-order backward scheme solved by means of matrix-free quasi-exact Newton subiterations. Numerical results show that the proposed IRS scheme leads to reductions in computational time by a factor of 3 to 5 for an accuracy comparable to that of the corresponding explicit Runge-Kutta scheme.
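Implicit residual smoothing replaces the residual R by the solution of a smoothing system before the update. A minimal sketch of the classical second-order smoother (I - eps * delta^2) Rbar = R on a periodic stencil is given below; the paper's fourth-order variant uses a bilaplacian (pentadiagonal) operator instead, and eps = 0.6 is an illustrative value.

```python
import numpy as np

# Second-order implicit residual smoothing on a periodic 1D stencil:
# solve (I - eps * delta^2) rbar = r, a (cyclic) tridiagonal system,
# assembled densely here for brevity.
def smooth_residual(r, eps=0.6):
    n = len(r)
    A = np.eye(n)
    for i in range(n):
        A[i, i] += 2.0 * eps
        A[i, (i - 1) % n] -= eps
        A[i, (i + 1) % n] -= eps
    return np.linalg.solve(A, np.asarray(r, dtype=float))

r = np.zeros(16)
r[8] = 1.0                    # a spiky residual
rbar = smooth_residual(r)     # spread out, same total
```

Each row of the operator sums to one, so the smoothed residual conserves the total while damping the high-frequency spike, which is what enlarges the stability limit of the underlying Runge-Kutta stages.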
Location-adaptive transmission for indoor visible light communication
NASA Astrophysics Data System (ADS)
Wang, Chun-yue; Wang, Lang; Chi, Xue-fen
2016-01-01
A location-adaptive transmission scheme for indoor visible light communication (VLC) systems is proposed in this paper. In this scheme, a symbol error rate (SER) of less than 10^-3 must be guaranteed. The scheme is realized by variable multilevel pulse-position modulation (MPPM), where the transmitters adaptively adjust the number of time slots n in the MPPM symbol according to the position of the receiver. The purpose of the scheme is to achieve the best data rate at different indoor locations. The results show that the location-adaptive transmission scheme based on variable MPPM is superior for the indoor VLC system.
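The rate trade-off behind adapting the slot count n can be sketched as follows. This uses the multipulse-PPM symbol count C(n, w) as a stand-in rate model; the pulse count w, the range of n, and the counting model are assumptions for illustration, and the SER model that actually drives the adaptation is not reproduced here.

```python
from math import comb, log2, floor

# Bits carried by one PPM symbol with n time slots and w pulses,
# counting distinct pulse placements (multipulse-PPM model, assumed).
def mppm_bits(n, w):
    return floor(log2(comb(n, w)))

# Throughput metric: bits per slot (a symbol occupies n slots).
def rate_per_slot(n, w):
    return mppm_bits(n, w) / n

# The transmitter would pick n to maximize rate subject to the SER
# constraint at the receiver's location (constraint omitted here).
best_n = max(range(4, 33), key=lambda n: rate_per_slot(n, 2))
```

More slots mean more bits per symbol but longer symbols, so the bits-per-slot metric peaks at a finite n; the location-dependent SER constraint then shifts which n is actually admissible.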
Adaptive Dynamic Bayesian Networks
Ng, B M
2007-10-26
A discrete-time Markov process can be compactly modeled as a dynamic Bayesian network (DBN)--a graphical model with nodes representing random variables and directed edges indicating causality between variables. Each node has a probability distribution, conditional on the variables represented by the parent nodes. A DBN's graphical structure encodes fixed conditional dependencies between variables. But in real-world systems, conditional dependencies between variables may be unknown a priori or may vary over time. Model errors can result if the DBN fails to capture all possible interactions between variables. Thus, we explore the representational framework of adaptive DBNs, whose structure and parameters can change from one time step to the next: a distribution's parameters and its set of conditional variables are dynamic. This work builds on recent work in nonparametric Bayesian modeling, such as hierarchical Dirichlet processes, infinite-state hidden Markov networks and structured priors for Bayes net learning. In this paper, we will explain the motivation for our interest in adaptive DBNs, show how popular nonparametric methods are combined to formulate the foundations for adaptive DBNs, and present preliminary results.
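A DBN's time-slice structure can be made concrete with a minimal two-variable sketch. The nodes, parent sets and CPT numbers below are invented for illustration; an adaptive DBN, as proposed in the paper, would additionally let `parents` and `cpt` change from one time step to the next.

```python
import random

# A minimal two-variable DBN: each binary node at time t+1 is sampled
# from a CPT conditioned on its parents' values at time t.
parents = {"rain": ["rain"], "sprinkler": ["rain"]}
cpt = {  # P(node = 1 | tuple of parent values)
    "rain":      {(0,): 0.2, (1,): 0.7},
    "sprinkler": {(0,): 0.5, (1,): 0.1},
}

def step(state, rng):
    new = {}
    for node, pa in parents.items():
        key = tuple(state[p] for p in pa)
        new[node] = 1 if rng.random() < cpt[node][key] else 0
    return new

rng = random.Random(0)
state = {"rain": 0, "sprinkler": 0}
traj = [state]
for _ in range(1000):
    state = step(state, rng)
    traj.append(state)
rain_freq = sum(s["rain"] for s in traj) / len(traj)
```

For these numbers the stationary probability of rain solves p = 0.2(1 - p) + 0.7p, i.e. p = 0.4, so the long-run sample frequency should hover near 0.4.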
Adaptive Force Control in Compliant Motion
NASA Technical Reports Server (NTRS)
Seraji, H.
1994-01-01
This paper addresses the problem of controlling a manipulator in compliant motion while in contact with an environment having an unknown stiffness. Two classes of solutions are discussed: adaptive admittance control and adaptive compliance control. In both admittance and compliance control schemes, compensator adaptation is used to ensure a stable and uniform system performance.
Direct Adaptive Control Of An Industrial Robot
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Lee, Thomas; Delpech, Michel
1992-01-01
A decentralized direct adaptive control scheme for a six-jointed industrial robot eliminates part of the overall computational burden imposed by a centralized controller, a burden that degrades the performance of the robot by reducing the sampling rate. The control and controller-adaptation laws are based on the observed performance of the manipulator: there is no need to model the dynamics of the robot. The adaptive controllers cope with uncertainties and variations in the robot and its payload.
Laser adaptive holographic hydrophone
Romashko, R V; Kulchin, Yu N; Bezruk, M N; Ermolaev, S A
2016-03-31
A new type of laser hydrophone, based on dynamic holograms formed in a photorefractive crystal, is proposed and studied. It is shown that the use of dynamic holograms makes complex optical schemes and systems for electronic stabilisation of the interferometer operating point unnecessary. This essentially simplifies the scheme of the laser hydrophone while preserving its high sensitivity, which makes it usable under strong variations of the environmental parameters. The laser adaptive holographic hydrophone implemented at present has a sensitivity of 3.3 mV Pa^-1 in the frequency range from 1 to 30 kHz. (laser hydrophones)
ERIC Educational Resources Information Center
Wheeler, Mary L.
1994-01-01
Discusses the study of identification codes and check-digit schemes as a way to show students a practical application of mathematics and introduce them to coding theory. Examples include postal service money orders, parcel tracking numbers, ISBN codes, bank identification numbers, and UPC codes. (MKR)
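One of the cited examples, the UPC check digit, is a weighted digit sum modulo 10; a short sketch follows (the 11-digit example code is a standard textbook value).

```python
# UPC-A check digit: triple the sum of the digits in odd positions
# (1st, 3rd, ...), add the even-position digits, then take whatever
# is needed to reach the next multiple of 10.
def upc_check_digit(digits11):
    d = [int(ch) for ch in digits11]
    total = 3 * sum(d[0::2]) + sum(d[1::2])
    return (10 - total % 10) % 10

check = upc_check_digit("03600029145")   # textbook example UPC
```

The same weighted-sum idea, with different weights and moduli, underlies the ISBN and bank-identification checks mentioned in the abstract.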
An expert system based intelligent control scheme for space bioreactors
NASA Technical Reports Server (NTRS)
San, Ka-Yiu
1988-01-01
An expert system based intelligent control scheme is being developed for the effective control and full automation of bioreactor systems in space. The scheme developed will have the capability to capture information from various resources including heuristic information from process researchers and operators. The knowledge base of the expert system should contain enough expertise to perform on-line system identification and thus be able to adapt the controllers accordingly with minimal human supervision.
Tetsu, Hiroyuki; Nakamoto, Taishi
2016-03-15
Radiation is an important process of energy transport, a force, and a basis for synthetic observations, so radiation hydrodynamics (RHD) calculations have occupied an important place in astrophysics. However, although the progress in computational technology is remarkable, their high numerical cost is still a persistent problem. In this work, we compare the following schemes used to solve the nonlinear simultaneous equations of an RHD algorithm with the flux-limited diffusion approximation: the Newton–Raphson (NR) method, operator splitting, and linearization (LIN), from the perspective of the computational cost involved. For operator splitting, in addition to the traditional simple operator splitting (SOS) scheme, we examined the scheme developed by Douglas and Rachford (DROS). We solve three test problems (the thermal relaxation mode, the relaxation and the propagation of linear waves, and radiating shock) using these schemes and then compare their dependence on the time step size. As a result, we find the conditions of the time step size necessary for adopting each scheme. The LIN scheme is superior to other schemes if the ratio of radiation pressure to gas pressure is sufficiently low. On the other hand, DROS can be the most efficient scheme if the ratio is high. Although the NR scheme can be adopted independently of the regime, especially in a problem that involves optically thin regions, the convergence tends to be worse. In all cases, SOS is not practical.
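The trade-off between Newton-Raphson iteration and one-shot linearization can be seen on a scalar stand-in problem. Below, one backward-Euler step of du/dt = -u^4 (a toy cooling-type law, not the paper's RHD system) is solved both ways.

```python
# One backward-Euler step of du/dt = -u**4, solved two ways.
def be_newton(u, dt, iters=20):
    # Newton-Raphson on g(v) = v + dt*v^4 - u = 0
    v = u
    for _ in range(iters):
        g = v + dt * v**4 - u
        gp = 1.0 + 4.0 * dt * v**3
        v -= g / gp
    return v

def be_linearized(u, dt):
    # Linearize u_new^4 ~ u^4 + 4 u^3 (u_new - u): closed-form update
    return (u + 3.0 * dt * u**4) / (1.0 + 4.0 * dt * u**3)

u, dt = 1.0, 0.01
u_nr = be_newton(u, dt)
u_lin = be_linearized(u, dt)
```

For a small time step the two answers nearly coincide and linearization wins on cost (no iteration); for large steps or strong nonlinearity the linearization error grows, which mirrors the paper's finding that the preferred scheme depends on the regime.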
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1983-01-01
This paper develops a new solution strategy which can handle elastic-plastic-creep problems in an inherently stable manner. This is achieved by introducing a new constrained time stepping algorithm which enables the solution of creep-initiated pre/postbuckling behavior where indefinite tangent stiffnesses are encountered. Due to the generality of the scheme, both monotone and cyclic loading histories can be handled. The presentation gives a thorough overview of current solution schemes and their shortcomings, develops the constrained time stepping algorithms, and illustrates the results of several numerical experiments which benchmark the new procedure.
Hybridization schemes for clusters
NASA Astrophysics Data System (ADS)
Wales, David J.
The concept of an optimum hybridization scheme for cluster compounds is developed with particular reference to electron counting. The prediction of electron counts for clusters and the interpretation of the bonding is shown to depend critically upon the presumed hybridization pattern of the cluster vertex atoms. This fact has not been properly appreciated in previous work, particularly in applications of Stone's tensor surface harmonic (TSH) theory, but is found to be a useful tool when dealt with directly. A quantitative definition is suggested for the optimum cluster hybridization pattern based directly upon the ease of interpretation of the molecular orbitals, and results are given for a range of species. The relationship of this scheme to the detailed cluster geometry is described using Löwdin's partitioned perturbation theory, and the success and range of application of TSH theory are discussed.
Scalable Nonlinear Compact Schemes
Ghosh, Debojyoti; Constantinescu, Emil M.; Brown, Jed
2014-04-01
In this work, we focus on compact schemes resulting in tridiagonal systems of equations, specifically the fifth-order CRWENO scheme. We propose a scalable implementation of the nonlinear compact schemes by implementing a parallel tridiagonal solver based on the partitioning/substructuring approach. We use an iterative solver for the reduced system of equations; however, we solve this system to machine zero accuracy to ensure that no parallelization errors are introduced. It is possible to achieve machine-zero convergence with few iterations because of the diagonal dominance of the system. The number of iterations is specified a priori instead of a norm-based exit criterion, and collective communications are avoided. The overall algorithm thus involves only point-to-point communication between neighboring processors. Our implementation of the tridiagonal solver differs from and avoids the drawbacks of past efforts in the following ways: it introduces no parallelization-related approximations (multiprocessor solutions are exactly identical to uniprocessor ones), it involves minimal communication, the mathematical complexity is similar to that of the Thomas algorithm on a single processor, and it does not require any communication and computation scheduling.
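The single-processor baseline whose operation count the substructured parallel solver is designed to match is the Thomas algorithm; a minimal sketch of that standard serial algorithm follows (not the authors' partitioned implementation).

```python
# Thomas algorithm for a tridiagonal system.
#   a: sub-diagonal (length n-1), b: diagonal (length n),
#   c: super-diagonal (length n-1), d: right-hand side (length n).
def thomas(a, b, c, d):
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                  # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Diagonally dominant test system with exact solution x = (1, 1, 1, 1):
x = thomas([1.0, 1.0, 1.0], [4.0, 4.0, 4.0, 4.0],
           [1.0, 1.0, 1.0], [5.0, 6.0, 6.0, 5.0])
```

The forward/backward sweeps are inherently sequential, which is exactly why a distributed compact scheme needs the partitioning/substructuring approach described in the abstract.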
High Order Finite Volume Nonlinear Schemes for the Boltzmann Transport Equation
Bihari, B L; Brown, P N
2005-03-29
The authors apply the nonlinear WENO (Weighted Essentially Nonoscillatory) scheme to the spatial discretization of the Boltzmann Transport Equation modeling linear particle transport. The method is a finite volume scheme which ensures not only conservation, but also provides for a more natural handling of boundary conditions, material properties and source terms, as well as an easier parallel implementation and post-processing. It is nonlinear in the sense that the stencil depends on the solution at each time step or iteration level. By biasing the gradient calculation towards the stencil with smaller derivatives, the scheme eliminates the Gibbs phenomenon, reducing oscillations of size O(1) to O(h^r), where h is the mesh size and r is the order of accuracy. The current implementation is three-dimensional, generalized for unequally spaced meshes, fully parallelized, and up to fifth-order accurate (WENO5) in space. For unsteady problems, the resulting nonlinear spatial discretization yields a set of ODEs in time, which in turn is solved via high-order implicit time-stepping with error control. For the steady-state case, the authors need to solve the nonlinear system, typically by Newton-Krylov iterations. Several numerical examples are presented to demonstrate the accuracy, non-oscillatory nature and efficiency of these high-order methods, in comparison with other fixed-stencil schemes.
Development of advanced control schemes for telerobot manipulators
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Zhou, Zhen-Lei
1991-01-01
To study space applications of telerobotics, Goddard Space Flight Center (NASA) has recently built a testbed composed mainly of a pair of redundant slave arms having seven degrees of freedom and a master hand controller system. The mathematical developments required for the computerized simulation study and motion control of the slave arms are presented. The slave arm forward kinematic transformation is presented which is derived using the D-H notation and is then reduced to its most simplified form suitable for real-time control applications. The vector cross product method is then applied to obtain the slave arm Jacobian matrix. Using the developed forward kinematic transformation and the quaternion representation of the slave arm end-effector orientation, computer simulation is conducted to evaluate the efficiency of the Jacobian in converting joint velocities into Cartesian velocities and to investigate the accuracy of the Jacobian pseudo-inverse for various sampling times. In addition, the equivalence between Cartesian velocities and quaternions is also verified using computer simulation. The motion control of the slave arm is examined. Three control schemes, the joint-space adaptive control scheme, the Cartesian adaptive control scheme, and the hybrid position/force control scheme, are proposed for controlling the motion of the slave arm end-effector. The development of the Cartesian adaptive control scheme is presented, and some preliminary results for the remaining control schemes are presented and discussed.
Adaptive control of a Stewart platform-based manipulator
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Antrazi, Sami S.; Zhou, Zhen-Lei; Campbell, Charles E., Jr.
1993-01-01
A joint-space adaptive control scheme for controlling noncompliant motion of a Stewart platform-based manipulator (SPBM) was implemented in the Hardware Real-Time Emulator at Goddard Space Flight Center. The six-degree-of-freedom SPBM uses two platforms and six linear actuators driven by dc motors. The adaptive control scheme is based on proportional-derivative controllers whose gains are adjusted by an adaptation law based on model reference adaptive control and the Liapunov direct method. It is concluded that the adaptive control scheme provides superior tracking capability as compared to fixed-gain controllers.
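The gain-adaptation idea behind model reference adaptive control can be sketched on a first-order scalar plant. The plant, reference model, gains and Lyapunov-rule adaptation laws below are textbook choices (assuming b > 0), not the paper's six-degree-of-freedom SPBM controller.

```python
# Lyapunov-rule model-reference adaptive control of the scalar plant
#   xdot = a*x + b*u   (a, b unknown to the controller, b > 0 assumed),
# tracking the reference model  xmdot = -am*xm + am*r.
def simulate(a=-1.0, b=1.0, am=2.0, gamma=2.0, r=1.0, dt=1e-3, T=20.0):
    x = xm = 0.0
    th1 = th2 = 0.0            # adaptive feedforward / feedback gains
    for _ in range(int(T / dt)):
        e = x - xm                         # tracking error
        u = th1 * r - th2 * x              # control law
        x += dt * (a * x + b * u)          # plant (explicit Euler)
        xm += dt * (-am * xm + am * r)     # reference model
        th1 += dt * (-gamma * e * r)       # adaptation laws
        th2 += dt * (gamma * e * x)
    return x, xm, th1, th2

x, xm, th1, th2 = simulate()
```

The gains are driven by the tracking error itself, with no model of a or b, which is the same "adjust gains from observed performance" principle the SPBM scheme applies to its PD loops.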
A family of compact high order coupled time-space unconditionally stable vertical advection schemes
NASA Astrophysics Data System (ADS)
Lemarié, Florian; Debreu, Laurent
2016-04-01
Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with small Courant numbers, compared to the Courant-Friedrichs-Lewy (CFL) condition, except a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while being robust to changes in Courant number in terms of accuracy. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical CFL restriction, while avoiding the numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution to the Courant number, large phase delay, and possibly an excess of numerical damping with unphysical orientation). Most regional oceanic models have been successfully using fourth-order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high-order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers, while having a very reasonable computational cost.
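The space-only building block that such talks generalize, the classical fourth-order compact (Padé) first derivative, can be sketched as a cyclic tridiagonal solve; the system is assembled densely here for brevity, and this sketch does not include the coupled time-space construction itself.

```python
import numpy as np

# Fourth-order compact (Pade) first derivative on a periodic grid:
#   f'_{i-1} + 4 f'_i + f'_{i+1} = (3/h) * (f_{i+1} - f_{i-1}),
# a cyclic tridiagonal system in the unknown derivatives.
def compact_derivative(f, h):
    n = len(f)
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    for i in range(n):
        A[i, (i - 1) % n] = 1.0
        A[i, i] = 4.0
        A[i, (i + 1) % n] = 1.0
        rhs[i] = 3.0 / h * (f[(i + 1) % n] - f[(i - 1) % n])
    return np.linalg.solve(A, rhs)

# Fourth-order convergence check on d/dx sin(x) over [0, 2*pi):
errs = {}
for n in (32, 64):
    x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    errs[n] = np.max(np.abs(compact_derivative(np.sin(x), x[1]) - np.cos(x)))
```

Doubling the resolution should shrink the error by roughly 2^4 = 16, the signature of the scheme's fourth-order accuracy despite its three-point stencil.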
Implicit schemes and parallel computing in unstructured grid CFD
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.
1995-01-01
The development of implicit schemes for obtaining steady state solutions to the Euler and Navier-Stokes equations on unstructured grids is outlined. Applications are presented that compare the convergence characteristics of various implicit methods. Next, the development of explicit and implicit schemes to compute unsteady flows on unstructured grids is discussed. The issues involved in parallelizing finite volume schemes on unstructured meshes in an MIMD (multiple instruction/multiple data stream) fashion are then outlined, together with techniques for partitioning unstructured grids among processors and for extracting parallelism in explicit and implicit solvers. Finally, some dynamic load balancing ideas, which are useful in adaptive transient computations, are presented.
A comparison of SPH schemes for the compressible Euler equations
NASA Astrophysics Data System (ADS)
Puri, Kunal; Ramachandran, Prabhu
2014-01-01
We review the current state-of-the-art Smoothed Particle Hydrodynamics (SPH) schemes for the compressible Euler equations. We identify three prototypical schemes and apply them to a suite of test problems in one and two dimensions. The schemes are, in order: standard SPH with the adaptive density kernel estimation (ADKE) technique introduced by Sigalotti et al. (2008) [44], the variational SPH formulation of Price (2012) [33] (referred to herein as the MPM scheme), and the Godunov-type SPH (GSPH) scheme of Inutsuka (2002) [12]. The tests investigate the accuracy of the inviscid discretizations, shock capturing ability and the particle settling behavior. The schemes are found to produce nearly identical results for the 1D shock tube problems, with the MPM and GSPH schemes being the most robust. The ADKE scheme requires parameter values which must be tuned to the problem at hand. We propose the addition of an artificial heating term to the GSPH scheme to eliminate unphysical spikes in the thermal energy at the contact discontinuity. The resulting modification is simple and can be readily incorporated in existing codes. In two dimensions, the differences between the schemes are more evident, with the quality of results determined by the particle distribution. In particular, the ADKE scheme shows signs of particle clumping and irregular motion for the 2D strong shock and Sedov point explosion tests. The noise in particle data is linked with the particle distribution, which remains regular for the Hamiltonian formulations (MPM and GSPH) and becomes irregular for the ADKE scheme. In the interest of reproducibility, we make available our implementation of the algorithms and test problems discussed in this work.
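The summation-density estimate shared by all three schemes can be sketched with the standard 1D cubic-spline kernel; the spacing, smoothing length h = 1.2 dx and unit density below are illustrative choices, not the paper's settings.

```python
import numpy as np

# Standard 1D cubic-spline (M4) SPH kernel, normalization 2/(3h).
def w_cubic(r, h):
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q <= 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q**3)
    if q <= 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

# Summation density rho_i = sum_j m_j W(x_i - x_j, h) on a uniform
# unit-density particle lattice.
dx = 0.1
x = np.arange(0.0, 10.0 + dx / 2, dx)
m = dx                         # particle mass giving rho0 = 1
h = 1.2 * dx
i = len(x) // 2                # an interior particle
rho = sum(m * w_cubic(x[i] - xj, h) for xj in x)
```

On a regular lattice the estimate recovers the reference density to within a fraction of a percent; the 2D clumping reported for the ADKE scheme is precisely a breakdown of this regularity.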
NASA Astrophysics Data System (ADS)
Hunt, Jason Daniel
An adaptive three-dimensional Cartesian approach for the parallel computation of compressible flow about static and dynamic configurations has been developed and validated. This is a further step towards a goal that remains elusive for CFD codes: the ability to model complex dynamic-geometry problems in a quick and automated manner. The underlying flow-solution method solves the three-dimensional Euler equations using a MUSCL-type finite-volume approach to achieve higher-order spatial accuracy. The flow solution, either steady or unsteady, is advanced in time via a two-stage time-stepping scheme. This basic solution method has been incorporated into a parallel block-adaptive Cartesian framework, using a block-octree data structure to represent varying spatial resolution, and to compute flow solutions in parallel. The ability to represent static geometric configurations has been introduced by cutting a geometric configuration out of a background block-adaptive Cartesian grid, then solving for the flow on the resulting volume grid. This approach has been extended for dynamic geometric configurations: components of a given configuration were permitted to independently move, according to prescribed rigid-body motion. Two flow-solver difficulties arise as a result of introducing static and dynamic configurations: small time steps; and the disappearance/appearance of cell volume during a time integration step. Both of these problems have been remedied through cell merging. The concept of cell merging and its implementation within the parallel block-adaptive method is described. While the parallelization of certain grid-generation and cell-cutting routines resulted from this work, the most significant contribution was developing the novel cell-merging paradigm that was incorporated into the parallel block-adaptive framework. Lastly, example simulations both to validate the developed method and to demonstrate its full capabilities have been carried out. A simple, steady
Massive momentum-subtraction scheme
NASA Astrophysics Data System (ADS)
Boyle, Peter; Del Debbio, Luigi; Khamseh, Ava
2017-03-01
A new renormalization scheme is defined for fermion bilinears in QCD at nonvanishing quark masses. This new scheme, denoted RI/mSMOM, preserves the benefits of the nonexceptional momenta introduced in the RI/SMOM scheme and allows a definition of renormalized composite fields away from the chiral limit. Some properties of the scheme are investigated by performing explicit one-loop computation in dimensional regularization.
Positivity-preserving numerical schemes for multidimensional advection
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Macvean, M. K.; Lock, A. P.
1993-01-01
This report describes the construction of an explicit, single time-step, conservative, finite-volume method for multidimensional advective flow, based on a uniformly third-order polynomial interpolation algorithm (UTOPIA). Particular attention is paid to the problem of flow-to-grid angle-dependent, anisotropic distortion typical of one-dimensional schemes used component-wise. The third-order multidimensional scheme automatically includes certain cross-difference terms that guarantee good isotropy (and stability). However, above first-order, polynomial-based advection schemes do not preserve positivity (the multidimensional analogue of monotonicity). For this reason, a multidimensional generalization of the first author's universal flux-limiter is sought. This is a very challenging problem. A simple flux-limiter can be found; but this introduces strong anisotropic distortion. A more sophisticated technique, limiting part of the flux and then restoring the isotropy-maintaining cross-terms afterwards, gives more satisfactory results. Test cases are confined to two dimensions; three-dimensional extensions are briefly discussed.
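The effect of flux limiting can be sketched in 1D with a minmod-limited MUSCL advection step, a simple stand-in for the universal limiter discussed above rather than the multidimensional UTOPIA scheme itself; limiting keeps the advected square wave free of new extrema.

```python
import numpy as np

# Minmod slope limiter, applied elementwise.
def minmod(a, b):
    return np.where(a * b <= 0.0, 0.0,
                    np.where(np.abs(a) < np.abs(b), a, b))

# Second-order MUSCL update for u_t + a u_x = 0 with a > 0 on a
# periodic grid; nu = a*dt/dx is the Courant number (TVD for nu <= 1).
def advect(u, nu, steps):
    for _ in range(steps):
        s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
        uface = u + 0.5 * (1.0 - nu) * s          # value at right face
        u = u - nu * (uface - np.roll(uface, 1))  # flux difference
    return u

idx = np.arange(100)
u0 = np.where((idx > 40) & (idx < 60), 1.0, 0.0)  # square wave
u1 = advect(u0.copy(), nu=0.5, steps=80)
```

The unlimited second-order scheme would over- and undershoot at the jumps; with the limiter the solution stays inside the initial bounds while mass is conserved exactly by the telescoping fluxes.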
Yasas, F M
1977-01-01
In response to a United Nations resolution, the Mobile Training Scheme (MTS) was set up to provide training to the trainers of national cadres engaged in frontline and supervisory tasks in social welfare and rural development. The training is innovative in being based on an analysis of field realities. The MTS team consisted of a leader, an expert on teaching methods and materials, and an expert on action research and evaluation. The country's trainers from different departments were sent to villages to work for a short period and to report their problems in fulfilling their roles. From these grass-roots experiences, they made an analysis of the job, determining what knowledge, attitudes and skills it required. Analyses of daily incidents and problems were used to produce indigenous teaching materials drawn from actual field practice. They also learned how to raise the problems encountered through government structures for policy making and decisions. The tasks of the students were to identify the skills needed for role performance by job analysis, daily diaries and project histories; to analyze the particular community by village profiles; to produce indigenous teaching materials; and to practice the role skills by actual role performance. The MTS scheme was tried in Nepal in 1974-75; 3 training programs trained 25 trainers and 51 frontline workers; indigenous teaching materials were created; technical papers were written; and consultations were provided. In Afghanistan the scheme was used in 1975-76; 45 participants completed the training; seminars were held; and an ongoing Council was created. It is hoped that the training program will be expanded to other countries.
Foglietta, J.H.
1999-07-01
A new LNG cycle has been developed for base load liquefaction facilities. This new design offers a different technical and economical solution, comparable in efficiency with the classical technologies. The new LNG scheme could offer attractive business opportunities to oil and gas companies that are trying to find paths to monetize gas sources more effectively, particularly for remote or offshore locations where smaller scale LNG facilities might be applicable. This design also offers an alternative route to classic LNG projects, as well as alternative fuel sources. Conceived to offer simplicity and access to industry-standard equipment, this design is a hybrid, combining a standard refrigeration system with turboexpander technology.
NASA Astrophysics Data System (ADS)
Etemadsaeed, Leila; Moczo, Peter; Kristek, Jozef; Ansari, Anooshiravan; Kristekova, Miriam
2016-10-01
We investigate the problem of finite-difference approximations of the velocity-stress formulation of the equation of motion and constitutive law on the staggered grid (SG) and collocated grid (CG). For approximating the first spatial and temporal derivatives, we use three approaches: Taylor expansion (TE), dispersion-relation preserving (DRP), and combined TE-DRP. The TE and DRP approaches represent two fundamental extremes. We derive useful formulae for DRP and TE-DRP approximations. We compare accuracy of the numerical wavenumbers and numerical frequencies of the basic TE, DRP and TE-DRP approximations. Based on the developed approximations, we construct and numerically investigate 14 basic TE, DRP and TE-DRP finite-difference schemes on SG and CG. We find that (1) the TE second-order in time, TE fourth-order in space, 2-point in time, 4-point in space SG scheme (that is the standard (2,4) VS SG scheme, say TE-2-4-2-4-SG) is the best scheme (of the 14 investigated) for large fractions of the maximum possible time step, or, in other words, in a homogeneous medium; (2) the TE second-order in time, combined TE-DRP second-order in space, 2-point in time, 4-point in space SG scheme (say TE-DRP-2-2-2-4-SG) is the best scheme for small fractions of the maximum possible time step, or, in other words, in models with large velocity contrasts if uniform spatial grid spacing and time step are used. The practical conclusion is that in computer codes based on standard TE-2-4-2-4-SG, it is enough to redefine the values of the approximation coefficients by those of TE-DRP-2-2-2-4-SG for increasing accuracy of modelling in models with large velocity contrast between rock and sediments.
Implicit approximate-factorization schemes for the low-frequency transonic equation
NASA Technical Reports Server (NTRS)
Ballhaus, W. F.; Steger, J. L.
1975-01-01
Two- and three-level implicit finite-difference algorithms for the low-frequency transonic small-disturbance equation are constructed using approximate factorization techniques. The schemes are unconditionally stable for the model linear problem. For nonlinear mixed flows, the schemes maintain stability by the use of conservatively switched difference operators for which stability is maintained only if shock propagation is restricted to be less than one spatial grid point per time step. The shock-capturing properties of the schemes were studied for various shock motions that might be encountered in problems of engineering interest. Computed results for a model airfoil problem that produces a flow field similar to that about a helicopter rotor in forward flight show the development of a shock wave and its subsequent propagation upstream off the front of the airfoil.
Numerical experiments with a symmetric high-resolution shock-capturing scheme
NASA Technical Reports Server (NTRS)
Yee, H. C.
1986-01-01
Characteristic-based explicit and implicit total variation diminishing (TVD) schemes for the two-dimensional compressible Euler equations have recently been developed. This is a generalization of recent work of Roe and Davis to a wider class of symmetric (non-upwind) TVD schemes other than Lax-Wendroff. The Roe and Davis schemes can be viewed as a subset of the class of explicit methods. The main properties of the present class of schemes are that they can be implicit, and, when steady-state calculations are sought, the numerical solution is independent of the time step. In a recent paper, a comparison of a linearized form of the present implicit symmetric TVD scheme with an implicit upwind TVD scheme originally developed by Harten and modified by Yee was given. Results favored the symmetric method. It was found that the latter is just as accurate as the upwind method while requiring less computational effort. Currently, more numerical experiments are being conducted on time-accurate calculations and on the effect of grid topology, numerical boundary condition procedures, and different flow conditions on the behavior of the method for steady-state applications. The purpose here is to report experiences with this type of scheme and give guidelines for its use.
A central-upwind scheme with artificial viscosity for shallow-water flows in channels
NASA Astrophysics Data System (ADS)
Hernandez-Duenas, Gerardo; Beljadid, Abdelaziz
2016-10-01
We develop a new high-resolution, non-oscillatory semi-discrete central-upwind scheme with artificial viscosity for shallow-water flows in channels with arbitrary geometry and variable topography. The artificial viscosity, proposed as an alternative to nonlinear limiters, allows us to use high-resolution reconstructions at a low computational cost. The scheme recognizes steady states at rest when a delicate balance between the source terms and flux gradients occurs. This balance in irregular geometries is more complex than that taking place in channels with vertical walls. A suitable technique is applied by properly taking into account the effects induced by the geometry. Incorporating the contributions of the artificial viscosity and an appropriate time step restriction, the scheme preserves the positivity of the water's depth. A description of the proposed scheme, its main properties as well as the proofs of well-balance and the positivity of the scheme are provided. Our numerical experiments confirm stability, well-balance, positivity-preserving properties and high resolution of the proposed method. Comparisons of numerical solutions obtained with the proposed scheme and experimental data are conducted, showing a good agreement. This scheme can be applied to shallow-water flows in channels with complex geometry and variable bed topography.
Uplink Access Schemes for LTE-Advanced
NASA Astrophysics Data System (ADS)
Liu, Le; Inoue, Takamichi; Koyanagi, Kenji; Kakura, Yoshikazu
The 3GPP LTE-Advanced has been attracting much attention recently, where the channel bandwidth would be beyond the maximum bandwidth of LTE, 20MHz. In LTE, single carrier-frequency division multiple access (SC-FDMA) was accepted as the uplink access scheme due to its advantage of very low cubic metric (CM). For LTE-A wideband transmission, multicarrier access would be more effective than single carrier access to make use of multi-user diversity and can maintain the physical channel structure of LTE, where the control information is transmitted on the edges of each 20MHz. In this paper, we discuss the access schemes in bandwidth under 20MHz as well as over 20MHz. In the case of bandwidth under 20MHz, we propose the access schemes allowing discontinuous resource allocation to enhance average throughput while maintaining cell-edge user throughput, that is, DFT-spread-OFDM with spectrum division control (SDC) and adaptive selection of SC-FDMA and OFDM (SC+OFDM). The number of discontinuous spectrums is denoted as spectrum division (SD). For DFT-S-OFDM, we define a parameter max SD as the upper limit of SD. We evaluate our proposed schemes in bandwidth under 20MHz and find that SC+OFDM as well as SDC with common max SD or UE-specific max SD can improve average throughput while their cell-edge user throughput can approach that of SC-FDMA. In the case of bandwidth over 20MHz, we consider key factors to decide a feasible access scheme for aggregating several 20MHz-wide bands.
Discrete unified gas kinetic scheme for all Knudsen number flows. II. Thermal compressible case.
Guo, Zhaoli; Wang, Ruijie; Xu, Kun
2015-03-01
This paper is a continuation of our work on the development of a multiscale numerical scheme from low-speed isothermal flow to compressible flows at high Mach numbers. In our earlier work [Z. L. Guo et al., Phys. Rev. E 88, 033305 (2013)], a discrete unified gas kinetic scheme (DUGKS) was developed for low-speed flows in which the Mach number is small so that the flow is nearly incompressible. In the current work, we extend the scheme to compressible flows with the inclusion of thermal effects and shock discontinuities based on the gas kinetic Shakhov model. This method is an explicit finite-volume scheme with the coupling of particle transport and collision in the flux evaluation at a cell interface. As a result, the time step of the method is not limited by the particle collision time. With the variation of the ratio between the time step and particle collision time, the scheme is an asymptotic preserving (AP) method, where both the Chapman-Enskog expansion for the Navier-Stokes solution in the continuum regime and the free transport mechanism in the rarefied limit can be precisely recovered with second-order accuracy in both space and time. The DUGKS is an idealized multiscale method for all Knudsen number flow simulations. A number of numerical tests, including the shock structure problem, the Sod tube problem over a whole range of degrees of rarefaction, and the two-dimensional Riemann problem in both continuum and rarefied regimes, are performed to validate the scheme. Comparisons with the results of direct simulation Monte Carlo (DSMC) and other benchmark data demonstrate that the DUGKS is a reliable and efficient method for multiscale flow problems.
Parameter testing for lattice filter based adaptive modal control systems
NASA Technical Reports Server (NTRS)
Sundararajan, N.; Williams, J. P.; Montgomery, R. C.
1983-01-01
For Large Space Structures (LSS), an adaptive control system is highly desirable. The present investigation is concerned with an 'indirect' adaptive control scheme wherein the system order, mode shapes, and modal amplitudes are estimated on-line using an identification scheme based on recursive, least-squares, lattice filters. Using the identified model parameters, a modal control law based on a pole-placement scheme with the objective of vibration suppression is employed. A method is presented for closed loop adaptive control of a flexible free-free beam. The adaptive control scheme consists of a two stage identification scheme working in series and a modal pole placement control scheme. The main conclusion from the current study is that the identified parameters cannot be directly used for controller design purposes.
An improved SPH scheme for cosmological simulations
NASA Astrophysics Data System (ADS)
Beck, A. M.; Murante, G.; Arth, A.; Remus, R.-S.; Teklu, A. F.; Donnert, J. M. F.; Planelles, S.; Beck, M. C.; Förster, P.; Imgrund, M.; Dolag, K.; Borgani, S.
2016-01-01
We present an implementation of smoothed particle hydrodynamics (SPH) with improved accuracy for simulations of galaxies and the large-scale structure. In particular, we implement and test a large set of SPH improvements in the developer version of GADGET-3. We use the Wendland kernel functions, a particle wake-up time-step limiting mechanism and a time-dependent scheme for artificial viscosity, including high-order gradient computation and a shear flow limiter. Additionally, we include a novel prescription for time-dependent artificial conduction, which corrects for gravitationally induced pressure gradients and improves the SPH performance in capturing the development of gas-dynamical instabilities. We extensively test our new implementation in a wide range of hydrodynamical standard tests including weak and strong shocks as well as shear flows, turbulent spectra, gas mixing, hydrostatic equilibria and self-gravitating gas clouds. We jointly employ all modifications; however, when necessary we study the performance of individual code modules. We approximate hydrodynamical states more accurately and with significantly less noise than standard GADGET-SPH. Furthermore, the new implementation promotes the mixing of entropy between different fluid phases, also within cosmological simulations. Finally, we study the performance of the hydrodynamical solver in the context of radiative galaxy formation and non-radiative galaxy cluster formation. We find galactic discs to be colder and more extended, and galaxy clusters to show entropy cores instead of steadily declining entropy profiles. In summary, we demonstrate that our improved SPH implementation overcomes most of the undesirable limitations of standard GADGET-SPH, thus becoming the core of an efficient code for large cosmological simulations.
A unified gas-kinetic scheme for continuum and rarefied flows
Xu Kun; Huang, J.-C.
2010-10-01
With discretized particle velocity space, a multiscale unified gas-kinetic scheme for entire Knudsen number flows is constructed based on the BGK model. The current scheme couples closely the update of macroscopic conservative variables with the update of the microscopic gas distribution function within a time step. In comparison with many existing kinetic schemes for the Boltzmann equation, the current method has no difficulty obtaining accurate Navier-Stokes (NS) solutions in the continuum flow regime with a time step much larger than the particle collision time. At the same time, the rarefied flow solution, even in the free molecule limit, can be captured accurately. The unified scheme is an extension of the gas-kinetic BGK-NS scheme from the continuum flow to the rarefied regime with the discretization of particle velocity space. The success of the method is due to the un-splitting treatment of the particle transport and collision in the evaluation of the local solution of the gas distribution function. For methods which use an operator splitting technique to solve the transport and collision separately, it is usually required that the time step be less than the particle collision time. This constraint basically makes these methods useless in the continuum flow regime, especially in high Reynolds number flow simulations. Theoretically, once the physical process of particle transport and collision is modeled statistically by the kinetic Boltzmann equation, the transport and collision become continuous operators in space and time, and their numerical discretization should be done consistently. Due to the multiscale nature of the unified scheme, in the update of macroscopic flow variables, the corresponding heat flux can be modified according to any realistic Prandtl number. Subsequently, this modification affects the equilibrium state in the next time level and the update of the microscopic distribution function. Therefore, instead of modifying the collision term
Comparison of thresholding schemes for visible light communication using mobile-phone image sensor.
Liu, Yang; Chow, Chi-Wai; Liang, Kevin; Chen, Hung-Yu; Hsu, Chin-Wei; Chen, Chung-Yen; Chen, Shih-Hao
2016-02-08
Based on the rolling shutter effect of the complementary metal-oxide-semiconductor (CMOS) image sensor, bright and dark fringes can be observed in each received frame. By demodulating the bright and dark fringes, the visible light communication (VLC) data logic can be retrieved. However, demodulating the bright and dark fringes is challenging because there is high data fluctuation and large extinction ratio (ER) variation in each frame. Hence a proper thresholding scheme is needed. In this work, we propose and compare experimentally three thresholding schemes: third-order polynomial curve fitting, an iterative scheme, and a quick adaptive scheme. The evaluation of these three thresholding schemes is performed.
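As a rough illustration of what the iterative scheme among the three could look like, here is an isodata-style iterative threshold applied to a column of grayscale values from a rolling-shutter frame; the exact algorithm, names and tolerance are assumptions for illustration, not taken from the paper.

```python
def iterative_threshold(column, tol=0.5):
    """Isodata-style iterative threshold: start from the global mean,
    then repeatedly set the threshold to the midpoint of the means of
    the bright and dark classes until it stops moving."""
    t = sum(column) / len(column)
    while True:
        bright = [p for p in column if p > t]
        dark = [p for p in column if p <= t]
        if not bright or not dark:
            return t
        t_new = 0.5 * (sum(bright) / len(bright) + sum(dark) / len(dark))
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

def demodulate(column, t):
    """Bright fringe -> logic 1, dark fringe -> logic 0."""
    return [1 if p > t else 0 for p in column]
```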
The nonlinear modified equation approach to analyzing finite difference schemes
NASA Technical Reports Server (NTRS)
Klopfer, G. H.; Mcrae, D. S.
1981-01-01
The nonlinear modified equation approach is taken in this paper to analyze the generalized Lax-Wendroff explicit scheme approximation to the unsteady one- and two-dimensional equations of gas dynamics. Three important applications of the method are demonstrated. The nonlinear modified equation analysis is used to (1) generate higher order accurate schemes, (2) obtain more accurate estimates of the discretization error for nonlinear systems of partial differential equations, and (3) generate an adaptive mesh procedure for the unsteady gas dynamic equations. Results are obtained for all three areas. For the adaptive mesh procedure, mesh point requirements for equal resolution of discontinuities were reduced by a factor of five for a 1-D shock tube problem solved by the explicit MacCormack scheme.
NASA Technical Reports Server (NTRS)
Yatheendradas, Soni; Narapusetty, Balachandrudu; Peters-Lidard, Christa; Funk, Christopher; Verdin, James
2014-01-01
A previous study analyzed errors in the numerical calculation of actual crop evapotranspiration (ET(sub a)) under soil water stress. Assuming no irrigation or precipitation, it constructed equations for ET(sub a) over limited soil-water ranges in a root zone drying out due to evapotranspiration. It then used a single crop-soil composite to provide recommendations about the appropriate usage of numerical methods under different values of the time step and the maximum crop evapotranspiration (ET(sub c)). This comment reformulates those ET(sub a) equations for applicability over the full range of soil water values, revealing a dependence of the relative error in numerical ET(sub a) on the initial soil water that was not seen in the previous study. It is shown that the recommendations based on a single crop-soil composite can be invalid for other crop-soil composites. Finally, a consideration of the numerical error in the time-cumulative value of ET(sub a) is discussed besides the existing consideration of that error over individual time steps as done in the previous study. This cumulative ET(sub a) is more relevant to the final crop yield.
NASA Astrophysics Data System (ADS)
Gillibrand, P. A.; Herzfeld, M.
2016-05-01
We present a flux-form semi-Lagrangian (FFSL) advection scheme designed for offline scalar transport simulation with coastal ocean models using curvilinear horizontal coordinates. The scheme conserves mass, overcoming problems of mass conservation typically experienced with offline transport models, and permits long time steps (relative to the Courant number) to be used by the offline model. These attributes make the method attractive for offline simulation of tracers in biogeochemical or sediment transport models using archived flow fields from hydrodynamic models. We describe the FFSL scheme, and test it on two idealised domains and one real domain, the Great Barrier Reef in Australia. For comparison, we also include simulations using a traditional semi-Lagrangian advection scheme for the offline simulations. We compare tracer distributions predicted by the offline FFSL transport scheme with those predicted by the original hydrodynamic model, assess the conservation of mass in all cases and contrast the computational efficiency of the schemes. We find that the FFSL scheme produced very good agreement with the distributions of tracer predicted by the hydrodynamic model, and conserved mass with an error of a fraction of one percent. In terms of computational speed, the FFSL scheme was comparable with the semi-Lagrangian method and an order of magnitude faster than the full hydrodynamic model, even when the latter ran in parallel on multiple cores. The FFSL scheme presented here therefore offers a viable mass-conserving and computationally-efficient alternative to traditional semi-Lagrangian schemes for offline scalar transport simulation in coastal models.
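A minimal sketch of the flux-form semi-Lagrangian idea in 1D, assuming a first-order donor-cell reconstruction (the paper's scheme is higher order and works on curvilinear grids): the flux through each face is the tracer mass integrated over the upstream departure region, so mass is conserved by construction and the Courant number may exceed 1.

```python
def ffsl_step(q, c):
    """One first-order flux-form semi-Lagrangian update on a periodic
    1D grid; c = u*dt/dx may exceed 1 (long time steps)."""
    n = len(q)
    k = int(c)                 # whole cells swept past each face
    f = c - k                  # fractional remainder
    flux = [0.0] * n           # flux[i] = mass through the left face of cell i
    for i in range(n):
        m = 0.0
        for j in range(1, k + 1):          # whole upstream cells
            m += q[(i - j) % n]
        m += f * q[(i - k - 1) % n]        # fractional donor cell
        flux[i] = m
    # new cell mass = old mass + inflow through left face - outflow through right face
    return [q[i] + flux[i] - flux[(i + 1) % n] for i in range(n)]
```

For c exactly 1 the update reduces to an exact one-cell shift, and for any c the face fluxes telescope on the periodic grid, so the total mass is unchanged to roundoff.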
Webster, Michael A.
2015-01-01
Sensory systems continuously mold themselves to the widely varying contexts in which they must operate. Studies of these adaptations have played a long and central role in vision science. In part this is because the specific adaptations remain a powerful tool for dissecting vision, by exposing the mechanisms that are adapting. That is, “if it adapts, it's there.” Many insights about vision have come from using adaptation in this way, as a method. A second important trend has been the realization that the processes of adaptation are themselves essential to how vision works, and thus are likely to operate at all levels. That is, “if it's there, it adapts.” This has focused interest on the mechanisms of adaptation as the target rather than the probe. Together both approaches have led to an emerging insight of adaptation as a fundamental and ubiquitous coding strategy impacting all aspects of how we see. PMID:26858985
Analysis of an antijam FH acquisition scheme
NASA Astrophysics Data System (ADS)
Miller, Leonard E.; Lee, Jhong S.; French, Robert H.; Torrieri, Don J.
1992-01-01
An easily implemented matched filter scheme for acquiring hopping code synchronization of incoming frequency-hopping (FH) signals is analyzed, and its performance is evaluated for two types of jamming: partial-band noise jamming and partial-band multitone jamming. The system is designed to reduce jammer-induced false alarms. The system's matched-filter output is compared to an adaptive threshold that is derived from a measurement of the number of acquisition channels being jammed. Example performance calculations are given for the frequency coverage of the jamming either fixed over the entire acquisition period or hopped, that is, changed for each acquisition pulse. It is shown that the jammer's optimum strategy (the worst case) is to maximize the false-alarm probability without regard for the effect on detection probability, for both partial-band noise and multi-tone jamming. It is also shown that a significantly lower probability of false acquisition results from using an adaptive matched-filter threshold, demonstrating that the strategy studied here is superior to conventional nonadaptive threshold schemes.
An adaptive routing scheme in scale-free networks
NASA Astrophysics Data System (ADS)
Ben Haddou, Nora; Ez-Zahraouy, Hamid; Benyoussef, Abdelilah
2015-05-01
We suggest an optimal form of traffic awareness, previously introduced as a routing protocol, which combines structural and local dynamic properties of the network to determine the path followed between the source and destination of a packet. Instead of using the shortest path, we incorporate the "efficient path" in the protocol, and we propose a new parameter α that controls the contribution of the queue in the routing process. Compared to the original model, the capacity of the network can be more than doubled when using the optimal conditions of our model. Moreover, the adjustment of the proposed parameter allows the minimization of the travel time.
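A sketch of the kind of routing rule described, assuming an illustrative node cost of the form degree(i)**beta + alpha*queue(i) (the paper's exact cost function may differ): Dijkstra then picks the path minimizing the summed node costs, so a congested or high-degree hub is bypassed when alpha is large.

```python
import heapq

def best_path(adj, queues, src, dst, alpha=1.0, beta=1.0):
    """Dijkstra over an 'efficient path'-style node cost: each visited
    node i contributes degree(i)**beta + alpha*queues[i]. Illustrative
    only; the source node's own cost is common to all paths and omitted."""
    cost = {i: len(nb) ** beta + alpha * queues[i] for i, nb in adj.items()}
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            nd = d + cost[v]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, u = [dst], dst
    while u != src:
        u = prev[u]
        path.append(u)
    return path[::-1], dist[dst]
```

In the test below the packet from node 1 to node 2 prefers the low-degree relay 5 until a queue builds up there, after which it reroutes through the hub 0.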
An energy conservative difference scheme for the nonlinear fractional Schrödinger equations
NASA Astrophysics Data System (ADS)
Wang, Pengde; Huang, Chengming
2015-07-01
In this paper, an energy conservative Crank-Nicolson difference scheme for nonlinear Riesz space-fractional Schrödinger equations is studied. We give a rigorous analysis of the conservation properties, including mass conservation and energy conservation in the discrete sense. Based on the Brouwer fixed-point theorem, the existence of the difference solution is proved. By virtue of the energy method, the difference solution is shown to be unique and convergent at the order of O(τ² + h²) in the l²-norm, with time step τ and mesh size h. Finally, a linearized iterative algorithm is presented and numerical experiments are given to confirm the theoretical results.
NASA Technical Reports Server (NTRS)
Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The efficiency gains obtained using higher-order implicit Runge-Kutta schemes as compared with the second-order accurate backward difference schemes for the unsteady Navier-Stokes equations are investigated. Three different algorithms for solving the nonlinear system of equations arising at each timestep are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a non-linear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on Inexact Newton's methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step while the other (PGMRES) uses the Generalized Minimal Residual method. Results demonstrating the relative superiority of these Newton's methods based schemes are presented. Efficiency gains as high as 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.
Seny, Bruno; Lambrechts, Jonathan; Toulorge, Thomas; Legat, Vincent; Remacle, Jean-François
2014-01-01
Although explicit time integration schemes require small computational efforts per time step, their efficiency is severely restricted by their stability limits. Indeed, the multi-scale nature of some physical processes combined with highly unstructured meshes can lead some elements to impose a severely small stable time step for a global problem. Multirate methods offer a way to increase the global efficiency by gathering grid cells in appropriate groups under local stability conditions. These methods are well suited to the discontinuous Galerkin framework. The parallelization of the multirate strategy is challenging because grid cells have different workloads. The computational cost is different for each sub-time step depending on the elements involved and a classical partitioning strategy is not adequate any more. In this paper, we propose a solution that makes use of multi-constraint mesh partitioning. It tends to minimize the inter-processor communications, while ensuring that the workload is almost equally shared by every computer core at every stage of the algorithm. Particular attention is given to the simplicity of the parallel multirate algorithm while minimizing computational and communication overheads. Our implementation makes use of the MeTiS library for mesh partitioning and the Message Passing Interface for inter-processor communication. Performance analyses for two and three-dimensional practical applications confirm that multirate methods preserve important computational advantages of explicit methods up to a significant number of processors.
Adaptive mesh refinement and adjoint methods in geophysics simulations
NASA Astrophysics Data System (ADS)
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...
Adaptive Finite Element Methods for Continuum Damage Modeling
NASA Technical Reports Server (NTRS)
Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.
1995-01-01
The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators, based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.
Chaotic communication scheme with multiplication
NASA Astrophysics Data System (ADS)
Bobreshov, A. M.; Karavaev, A. A.
2007-05-01
A new scheme of data transmission with nonlinear admixing is described, in which the two mutually inverse operations (multiplication and division) ensure multiplicative mixing of the informative and chaotic signals that provides a potentially higher degree of security. A special feature of the proposed scheme is the absence of limitations (related to the division by zero) imposed on the types of informative signals.
An assessment of semi-discrete central schemes for hyperbolic conservation laws.
Christon, Mark Allen; Robinson, Allen Conrad; Ketcheson, David Isaac
2003-09-01
clearly outperforms the central schemes in terms of accuracy at a given grid resolution and the cost of additional complexity in the numerical flux functions. Overall we have observed that the finite volume schemes, implemented within a well-designed framework, are extremely efficient with (potentially) very low memory storage. Finally, we have found by computational experiment that second and third-order strong-stability preserving (SSP) time integration methods with the number of stages greater than the order provide a useful enhanced stability region. However, we observe that non-SSP and non-optimal SSP schemes with SSP factors less than one can still be very useful if used with time-steps below the standard CFL limit. The 'well-designed' integration schemes that we have examined appear to perform well in all instances where the time step is maintained below the standard physical CFL limit.
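For reference, the optimal three-stage, third-order SSP Runge-Kutta method (Shu-Osher form) mentioned above can be written as convex combinations of forward-Euler steps, so any property of a single Euler step, such as TVD with a suitable spatial operator, carries over under the same CFL limit. A sketch, with a first-order upwind operator standing in for the paper's flux functions:

```python
def ssprk3(u, dt, L):
    """Optimal 3-stage, 3rd-order SSP Runge-Kutta (Shu-Osher form):
    each stage is a convex combination of forward-Euler steps."""
    u1 = [a + dt * b for a, b in zip(u, L(u))]
    u2 = [0.75 * a + 0.25 * (b + dt * c) for a, b, c in zip(u, u1, L(u1))]
    return [a / 3.0 + 2.0 / 3.0 * (b + dt * c) for a, b, c in zip(u, u2, L(u2))]

def upwind(u):
    """First-order upwind operator for u_t + u_x = 0, periodic, dx = 1."""
    return [-(u[i] - u[i - 1]) for i in range(len(u))]

def tv(u):
    """Total variation on the periodic grid."""
    return sum(abs(u[i] - u[i - 1]) for i in range(len(u)))
```

Because the SSP coefficient of this method is 1, the total variation of a step profile stays non-increasing for any dt up to the forward-Euler CFL limit, as the test checks at dt = 0.9.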
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Steffen, Christopher J., Jr.
1991-01-01
A new flux splitting scheme is proposed. The scheme is remarkably simple, and yet its accuracy rivals, and in some cases surpasses, that of Roe's solver in the Euler and Navier-Stokes solutions performed in this study. The scheme is robust and converges as fast as the Roe splitting. An approximately defined cell-face advection Mach number is proposed using values from the two straddling cells via their associated characteristic speeds. This interface Mach number is then used to determine the upwind extrapolation for the convective quantities. Accordingly, the scheme is named the Advection Upstream Splitting Method (AUSM). A new pressure splitting is introduced which is shown to behave well, yielding much smoother results than other existing pressure splittings. Of particular interest is the supersonic blunt body problem, in which the Roe scheme gives anomalous solutions. The AUSM produces correct solutions without difficulty for a wide range of flow conditions as well as grids.
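The interface Mach number and pressure splitting described above can be illustrated with a minimal 1-D AUSM flux sketch. The split polynomials below are the standard subsonic/supersonic forms; the variable names and the symmetric upwind average in the convective part are my assumptions, not the authors' code:

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats (assumed perfect gas)

def split_mach(M, sign):
    # Van Leer-type split Mach polynomials: quadratic for |M| <= 1,
    # pure upwind for supersonic interface states.
    if abs(M) <= 1.0:
        return sign * 0.25 * (M + sign) ** 2
    return 0.5 * (M + sign * abs(M))

def split_pressure(M, p, sign):
    # AUSM pressure splitting (second-degree polynomial form).
    if abs(M) <= 1.0:
        return 0.25 * p * (M + sign) ** 2 * (2.0 - sign * M)
    return 0.5 * p * (M + sign * abs(M)) / M

def ausm_flux(rho_L, u_L, p_L, rho_R, u_R, p_R):
    """AUSM numerical flux for the 1-D Euler equations at one interface:
    an interface Mach number upwinds the convective quantities, and the
    pressure is split separately."""
    a_L = np.sqrt(GAMMA * p_L / rho_L)
    a_R = np.sqrt(GAMMA * p_R / rho_R)
    H_L = GAMMA / (GAMMA - 1.0) * p_L / rho_L + 0.5 * u_L ** 2
    H_R = GAMMA / (GAMMA - 1.0) * p_R / rho_R + 0.5 * u_R ** 2
    M_half = split_mach(u_L / a_L, +1.0) + split_mach(u_R / a_R, -1.0)
    p_half = (split_pressure(u_L / a_L, p_L, +1.0) +
              split_pressure(u_R / a_R, p_R, -1.0))
    phi_L = np.array([rho_L * a_L, rho_L * a_L * u_L, rho_L * a_L * H_L])
    phi_R = np.array([rho_R * a_R, rho_R * a_R * u_R, rho_R * a_R * H_R])
    # upwind selection of the convective vector by the sign of M_half
    conv = 0.5 * (M_half * (phi_L + phi_R) - abs(M_half) * (phi_R - phi_L))
    return conv + np.array([0.0, p_half, 0.0])
```

For a uniform state the convective part vanishes and the flux reduces to the pressure term, while for a supersonic left state the scheme returns the exact left-state Euler flux, as pure upwinding requires.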
An evaluation of temporally adaptive transformation approaches for solving Richards' equation
NASA Astrophysics Data System (ADS)
Williams, Glenn A.; Miller, Cass T.
Developing robust and efficient numerical solution methods for Richards' equation (RE) continues to be a challenge for certain problems. We consider such a problem here: infiltration into unsaturated porous media initially at static conditions for uniform and non-uniform pore size media. For ponded boundary conditions, a sharp infiltration front results, which propagates through the media. We evaluate the resultant solution method for robustness and efficiency using combinations of variable transformation and adaptive time-stepping methods. Transformation methods introduce a change of variable that results in a smoother solution, which is more amenable to efficient numerical solution. We use adaptive time-stepping methods to adjust the time-step size, and in some cases the order of the solution method, to meet a constraint on nonlinear solution convergence properties or a solution error criterion. Results for three test problems showed that adaptive time-stepping methods provided robust solutions; in most cases transforming the dependent variable led to more efficient solutions than untransformed approaches, especially as the pore-size uniformity increased; and the higher-order adaptive time integration method was robust and the most efficient method evaluated.
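A step controller of the kind described, which adapts the time-step size to a nonlinear (e.g. Newton) convergence criterion, can be sketched as follows; the growth/shrink factors and the iteration target are illustrative assumptions, not values from the paper:

```python
def adapt_dt(dt, n_iter, converged, n_target=5,
             grow=1.3, shrink=0.5, dt_min=1e-10, dt_max=1.0):
    """Heuristic step controller for an implicit nonlinear solver:
    grow dt when the nonlinear iteration converges quickly, shrink dt
    (and flag a retry of the failed step) when it does not converge.
    Returns (new_dt, retry_step)."""
    if not converged:
        return max(shrink * dt, dt_min), True
    if n_iter <= n_target:
        return min(grow * dt, dt_max), False
    return dt, False
```

A controller like this trades a little extra work near sharp infiltration fronts (where convergence is hard and dt shrinks) for large steps in the smooth regions, which is the efficiency mechanism the abstract refers to.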
NASA Astrophysics Data System (ADS)
Moczo, P.; Kristek, J.; Galis, M.; Pazak, P.
2009-12-01
Numerical prediction of earthquake ground motion in sedimentary basins and valleys often has to account for P-wave to S-wave speed ratios (Vp/Vs) as large as 5 and even larger, mainly in sediments below the groundwater level. The ratio can attain values larger than 10 in unconsolidated sediments (e.g. in Ciudad de México). In the process of developing 3D optimally-accurate finite-difference schemes we encountered a serious problem with accuracy in media with a large Vp/Vs ratio. This led us to investigate the fundamental reasons for the inaccuracy. In order to identify the basic inherent aspects of the numerical schemes responsible for their behavior with varying Vp/Vs ratio, we restricted ourselves to the most basic 2nd-order 2D numerical schemes on a uniform grid in a homogeneous medium. Although basic in the specified sense, the schemes comprise the decisive features for the accuracy of a wide class of numerical schemes. We investigated 6 numerical schemes: finite-difference_displacement_conventional grid (FD_D_CG) finite-element_Lobatto integration (FE_L) finite-element_Gauss integration (FE_G) finite-difference_displacement-stress_partly-staggered grid (FD_DS_PSG) finite-difference_displacement-stress_staggered grid (FD_DS_SG) finite-difference_velocity-stress_staggered grid (FD_VS_SG) We defined and calculated local errors of the schemes in amplitude and polarization. Because different schemes use different time steps, they need different numbers of time levels to calculate the solution for a desired time window. Therefore, we normalized the errors for a unit time. The normalization allowed for a direct comparison of errors of different schemes. Extensive numerical calculations for wide ranges of values of the Vp/Vs ratio, spatial sampling ratio, stability ratio, and the entire range of directions of propagation with respect to the spatial grid led to interesting and surprising findings. Accuracy of FD_D_CG, FE_L and FE_G strongly depends on the Vp/Vs ratio. The schemes are not
NASA Technical Reports Server (NTRS)
Allen, Dale J.; Douglass, Anne R.; Rood, Richard B.; Guthrie, Paul D.
1991-01-01
The application of van Leer's scheme, a monotonic, upstream-biased differencing scheme, to three-dimensional constituent transport calculations is shown. The major disadvantage of the scheme is shown to be a self-limiting diffusion. A major advantage of the scheme is shown to be its ability to maintain constituent correlations. The scheme is adapted for a spherical coordinate system with a hybrid sigma-pressure coordinate in the vertical. Special consideration is given to cross-polar flow. The vertical wind calculation is shown to be extremely sensitive to the method of calculating the divergence. This sensitivity implies that a vertical wind formulation consistent with the transport scheme is essential for accurate transport calculations. The computational savings of the time-splitting method used to solve this equation are shown. Finally, the capabilities of this scheme are illustrated by an ozone transport and chemistry model simulation.
NASA Astrophysics Data System (ADS)
Zhao, Jia; Yang, Xiaofeng; Shen, Jie; Wang, Qi
2016-01-01
We develop a linear, first-order, decoupled, energy-stable scheme for a binary hydrodynamic phase field model of mixtures of nematic liquid crystals and viscous fluids that satisfies an energy dissipation law. We show that the semi-discrete scheme in time satisfies an analogous, semi-discrete energy-dissipation law for any time-step and is therefore unconditionally stable. We then discretize the spatial operators in the scheme by a finite-difference method and implement the fully discrete scheme in a simplified version using CUDA on GPUs in 3 dimensions in space and time. Two numerical examples for rupture of nematic liquid crystal filaments immersed in a viscous fluid matrix are given, illustrating the effectiveness of this new scheme in resolving complex interfacial phenomena in free surface flows of nematic liquid crystals.
Revisit to the THINC scheme: A simple algebraic VOF algorithm
NASA Astrophysics Data System (ADS)
Xiao, Feng; Ii, Satoshi; Chen, Chungang
2011-08-01
This short note presents an improved multi-dimensional algebraic VOF method to capture moving interfaces. The interface jump in the THINC (tangent of hyperbola for INterface capturing) scheme is adaptively scaled to a proper thickness according to the interface orientation. The numerical accuracy in computing multi-dimensional moving interfaces is significantly improved. Without any geometrical reconstruction, the proposed method is extremely simple and easy to use, and its numerical accuracy is superior to other existing methods of its kind and comparable to the conventional PLIC (piecewise linear interface calculation) type VOF schemes.
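The core of THINC is a tanh-shaped sub-cell profile whose jump location is fixed by requiring the profile to reproduce the cell's volume fraction; the improvement described above then adaptively scales the steepness β with the interface orientation. A 1-D sketch of the jump-location step, using bisection instead of the closed-form inversion so its correctness is self-evident (parameter values and names are my assumptions):

```python
import math

def thinc_jump(C, beta=3.5, iters=100):
    """Find the jump centre d of the THINC profile
    phi(xi) = 0.5*(1 + tanh(beta*(xi - d))) on xi in [0, 1]
    whose exact cell average equals the volume fraction C (0 < C < 1)."""
    def avg(d):
        # exact integral of the tanh profile over the unit cell
        return 0.5 + (math.log(math.cosh(beta * (1.0 - d))) -
                      math.log(math.cosh(beta * d))) / (2.0 * beta)
    lo, hi = -10.0, 11.0          # avg(lo) ~ 1, avg(hi) ~ 0
    for _ in range(iters):        # avg is monotonically decreasing in d
        mid = 0.5 * (lo + hi)
        if avg(mid) > C:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the reconstruction is purely algebraic, no geometric interface reconstruction is needed, which is the simplicity the note emphasizes; the multi-dimensional scheme replaces the fixed β by an orientation-dependent one.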
Importance biasing scheme implemented in the PRIZMA code
Kandiev, I.Z.; Malyshkin, G.N.
1997-12-31
The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has wide capabilities to describe geometry, sources, and material composition, and to obtain parameters specified by the user. It can calculate the paths of particle cascades (including neutrons, photons, electrons, positrons and heavy charged particles), taking possible transmutations into account. An importance biasing scheme was implemented to solve problems that require the calculation of functionals related to small probabilities (for example, problems of protection against radiation, problems of detection, etc.). The scheme makes it possible to adapt the trajectory-building algorithm to the peculiarities of the problem.
NASA Astrophysics Data System (ADS)
Boscheri, Walter; Balsara, Dinshaw S.; Dumbser, Michael
2014-06-01
In this paper we use the genuinely multidimensional HLL Riemann solvers recently developed by Balsara et al. in [13] to construct a new class of computationally efficient high order Lagrangian ADER-WENO one-step ALE finite volume schemes on unstructured triangular meshes. A nonlinear WENO reconstruction operator allows the algorithm to achieve high order of accuracy in space, while high order of accuracy in time is obtained by the use of an ADER time-stepping technique based on a local space-time Galerkin predictor. The multidimensional HLL and HLLC Riemann solvers operate at each vertex of the grid, considering the entire Voronoi neighborhood of each node and allow for larger time steps than conventional one-dimensional Riemann solvers. The results produced by the multidimensional Riemann solver are then used twice in our one-step ALE algorithm: first, as a node solver that assigns a unique velocity vector to each vertex, in order to preserve the continuity of the computational mesh; second, as a building block for genuinely multidimensional numerical flux evaluation that allows the scheme to run with larger time steps compared to conventional finite volume schemes that use classical one-dimensional Riemann solvers in normal direction. The space-time flux integral computation is carried out at the boundaries of each triangular space-time control volume using the Simpson quadrature rule in space and Gauss-Legendre quadrature in time. A rezoning step may be necessary in order to overcome element overlapping or crossing-over. Since our one-step ALE finite volume scheme is based directly on a space-time conservation formulation of the governing PDE system, the remapping stage is not needed, making our algorithm a so-called direct ALE method.
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
1999-01-01
This paper presents a modification of the spring analogy scheme which uses axial linear spring stiffness with selective spring stiffening/relaxation. An alternate approach to solving the geometric conservation law is taken which eliminates the need for storage of metric Jacobians at previous time steps. Efficiency and verification are illustrated with several unsteady 2-D airfoil Euler computations. The method is next applied to the computation of the turbulent flow about a 2-D airfoil and wing with two and three- dimensional moving spoiler surfaces, and the results compared with Benchmark Active Controls Technology (BACT) experimental data. The aeroelastic response at low dynamic pressure of an airfoil to a single large scale oscillation of a spoiler surface is computed. This study confirms that it is possible to achieve accurate solutions with a very large time step for aeroelastic problems using the fluid solver and aeroelastic integrator as discussed in this paper.
NASA Technical Reports Server (NTRS)
Abarbanel, S.; Gottlieb, D.
1976-01-01
The paper considers the leap-frog finite-difference method (Kreiss and Oliger, 1973) for systems of partial differential equations of the form du/dt = dF/dx + dG/dy + dH/dz, where d denotes partial derivative, u is a q-component vector and a function of x, y, z, and t, and the vectors F, G, and H are functions of u only. The original leap-frog algorithm is shown to admit a modification that improves on the stability conditions for two and three dimensions by factors of 2 and 2.8, respectively, thereby permitting larger time steps. The scheme for three dimensions is considered optimal in the sense that it combines simple averaging and large time steps.
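For reference, the unmodified leap-frog scheme for the 1-D advection equation u_t + c u_x = 0 (the one-dimensional analogue of the systems above) looks like this; the periodic domain and the exact seeding of the first time level are my assumptions for a clean sketch, and this is the original scheme, not the paper's large-time-step modification:

```python
import numpy as np

def leapfrog_advect(u0_func, c=1.0, N=100, cfl=0.5, T=2 * np.pi):
    """Three-level central (leap-frog) scheme for u_t + c u_x = 0 on a
    periodic domain [0, 2*pi): u^{n+1}_j = u^{n-1}_j
    - (c*dt/dx) * (u^n_{j+1} - u^n_{j-1}).
    The first step is seeded with the exact solution."""
    x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
    dx = x[1] - x[0]
    dt = cfl * dx / c
    steps = int(round(T / dt))
    u_old = u0_func(x)                 # level n-1 (t = 0)
    u = u0_func(x - c * dt)            # level n   (t = dt), exact
    for _ in range(steps - 1):
        u_new = u_old - (c * dt / dx) * (np.roll(u, -1) - np.roll(u, 1))
        u_old, u = u, u_new
    return x, u, steps * dt
```

The scheme is non-dissipative and second-order accurate, so after one full period a smooth profile returns with only a small phase error; the modification studied in the paper enlarges the admissible dt beyond the standard leap-frog stability bound.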
Central Upwind Scheme for a Compressible Two-Phase Flow Model
Ahmed, Munshoor; Saleem, M. Rehan; Zia, Saqib; Qamar, Shamsul
2015-01-01
In this article, a compressible two-phase reduced five-equation flow model is numerically investigated. The model is non-conservative, and the governing equations consist of two equations describing the conservation of mass, one for overall momentum and one for total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side which represents the energy exchange between the two fluids in the form of mechanical and thermodynamical work. For the numerical approximation of the model a high-resolution central upwind scheme is implemented. This is a non-oscillatory upwind-biased finite volume scheme which does not require a Riemann solver at each time step. A few numerical case studies of two-phase flows are presented. For validation and comparison, the same model is also solved by using kinetic flux-vector splitting (KFVS) and staggered central schemes. It was found that the central upwind scheme produces results comparable to those of the KFVS scheme. PMID:26039242
Assessment of various convective parametrisation schemes for warm season precipitation forecasts
NASA Astrophysics Data System (ADS)
Mazarakis, Nikos; Kotroni, Vassiliki; Lagouvardos, Konstantinos; Argyriou, Athanassios
2010-05-01
In the frame of the EU/FP6-funded FLASH project, the sensitivity of numerical model quantitative precipitation forecasts to the choice of the convective parameterization scheme (CPS) has been examined for twenty selected cases characterized by intense convective activity and widespread precipitation over Greece during the warm periods of 2005-2007. The schemes are: Kain-Fritsch, Grell, and Betts-Miller-Janjic. The simulated precipitation from the 8-km grid was verified against raingauge measurements and lightning data provided by the ZEUS long-range lightning detection system. The validation against both sources of data showed that, among the three CPSs, the most consistent behavior in quantitative precipitation forecasting was obtained with the Kain-Fritsch scheme, which provided the best statistical scores. Various modifications of the Kain-Fritsch (KF) scheme have further been examined. The modifications include: (a) maximizing the precipitation efficiency of the convective scheme, (b) changing the convective time step, (c) forcing the convective scheme to produce more/less cloud material, and (d) altering the vertical profile of updraft mass flux detrainment.
A semi-implicit gas-kinetic scheme for smooth flows
NASA Astrophysics Data System (ADS)
Wang, Peng; Guo, Zhaoli
2016-08-01
In this paper, a semi-implicit gas-kinetic scheme (SIGKS) is derived for smooth flows based on the Bhatnagar-Gross-Krook (BGK) equation. As a finite-volume scheme, the evolution of the average flow variables in a control volume is under the Eulerian framework, whereas the construction of the numerical flux across the cell interface comes from the Lagrangian perspective. The adoption of the Lagrangian aspect makes the collision and the transport mechanisms intrinsically coupled together in the flux evaluation. As a result, the time step size is independent of the particle collision time and solely determined by the Courant-Friedrichs-Lewy (CFL) condition. An analysis of the reconstructed distribution function at the cell interface shows that the SIGKS can be viewed as a modified Lax-Wendroff type scheme with an additional term. Furthermore, the additional term, coming from the implicitness in the reconstruction, is expected to enhance the numerical stability of the scheme. A number of numerical tests of smooth flows with low and moderate Mach numbers are performed to benchmark the SIGKS. The results show that the method has second-order spatial accuracy, and can give accurate numerical solutions in comparison with benchmark results. It is also demonstrated that the numerical stability of the proposed scheme is better than that of the original GKS for smooth flows.
Relaxation schemes for Chebyshev spectral multigrid methods
NASA Technical Reports Server (NTRS)
Kang, Yimin; Fulton, Scott R.
1993-01-01
Two relaxation schemes for Chebyshev spectral multigrid methods are presented for elliptic equations with Dirichlet boundary conditions. The first scheme is a pointwise-preconditioned Richardson relaxation scheme and the second is a line relaxation scheme. The line relaxation scheme provides an efficient and relatively simple approach for solving two-dimensional spectral equations. Numerical examples and comparisons with other methods are given.
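The first relaxation scheme named above, pointwise-preconditioned Richardson relaxation, can be sketched on a plain matrix; the Jacobi (diagonal) preconditioner and damping factor here are illustrative assumptions, whereas the paper applies the idea to Chebyshev spectral operators inside a multigrid cycle:

```python
import numpy as np

def preconditioned_richardson(A, b, omega=0.8, iters=2000):
    """Damped Richardson relaxation with a pointwise (diagonal)
    preconditioner: u <- u + omega * D^{-1} * (b - A u),
    where D = diag(A)."""
    d_inv = 1.0 / np.diag(A)
    u = np.zeros_like(b)
    for _ in range(iters):
        u = u + omega * d_inv * (b - A @ u)
    return u
```

Used alone the iteration converges slowly, which is exactly why it is embedded as a smoother inside a multigrid cycle in the paper.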
Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.
2008-01-01
Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
NASA Astrophysics Data System (ADS)
Lorite, I. J.; Mateos, L.; Fereres, E.
2005-01-01
The simulations of dynamic, spatially distributed non-linear models are impacted by the degree of spatial and temporal aggregation of their input parameters and variables. This paper deals with the impact of these aggregations on the assessment of irrigation scheme performance by simulating water use and crop yield. The analysis was carried out on a 7000 ha irrigation scheme located in Southern Spain. Four irrigation seasons differing in rainfall patterns were simulated (from 1996/1997 to 1999/2000), with the actual soil parameters and with hypothetical soil parameters representing wider ranges of soil variability. Three spatial aggregation levels were considered: (I) individual parcels (about 800), (II) command areas (83) and (III) the whole irrigation scheme. Equally, five temporal aggregation levels were defined: daily, weekly, monthly, quarterly and annually. The results showed little impact of spatial aggregation on the predictions of irrigation requirements and of crop yield for the scheme. The impact of aggregation was greater in rainy years, for deep-rooted crops (sunflower) and in scenarios with heterogeneous soils. The highest impact on irrigation requirement estimations was in the scenario with the most heterogeneous soil and in 1999/2000, a year with frequent rainfall during the irrigation season: a difference of 7% between aggregation levels I and III was found. Equally, it was found that temporal aggregation had a significant impact on irrigation requirement predictions only for time steps longer than 4 months. In general, simulated annual irrigation requirements decreased as the time step increased. The impact was greater in rainy years (especially with abundant and concentrated rain events) and in crops whose cycles coincide in part with the rainy season (garlic, winter cereals and olive). It is concluded that in this case, average, representative values for the main inputs of the model (crop, soil properties and sowing dates) can generate results
A well-balanced unified gas-kinetic scheme for multiscale flow transport under gravitational field
NASA Astrophysics Data System (ADS)
Xiao, Tianbai; Cai, Qingdong; Xu, Kun
2017-03-01
The gas dynamics under a gravitational field is usually associated with a multiple-scale nature due to large density variations and a wide variation of the local Knudsen number. It is challenging to construct a reliable numerical algorithm to accurately capture the non-equilibrium physical effects in different regimes. In this paper, a well-balanced unified gas-kinetic scheme (UGKS) for all flow regimes under a gravitational field will be developed, which can be used for the study of non-equilibrium gravitational gas systems. The well-balanced scheme here is defined as a method that evolves an isolated gravitational system under any initial condition to a hydrostatic equilibrium state and keeps such a solution. Preserving such a property is important for a numerical scheme intended for the study of slowly evolving gravitational systems, such as the formation of stars and galaxies. Based on the Boltzmann model with an external forcing term, the UGKS uses an analytic time-dependent (or scale-dependent) solution in the construction of the discretized fluid dynamic equations on the cell size and time step scales, i.e., the so-called direct modeling method. As a result, with the variation of the ratio between the numerical time step and the local particle collision time, the UGKS is able to recover flow physics in different regimes and provides a continuous spectrum of gas dynamics. For the first time, the flow physics of a gravitational system in the transition regime can be studied using the UGKS, and the non-equilibrium phenomena in such a gravitational system can be clearly identified. Many numerical examples will be used to validate the scheme. New physical observations, such as the correlation between the gravitational field and the heat flux in the transition regime, will be presented. The current method provides an indispensable tool for the study of non-equilibrium gravitational systems.
A resource-efficient adaptive Fourier analyzer
NASA Astrophysics Data System (ADS)
Hajdu, C. F.; Zamantzas, C.; Dabóczi, T.
2016-10-01
We present a resource-efficient frequency adaptation method to complement the Fourier analyzer proposed by Péceli. The novel frequency adaptation scheme is based on the adaptive Fourier analyzer suggested by Nagy. The frequency adaptation method was elaborated with a view to realizing a detector connectivity check on an FPGA in a new beam loss monitoring (BLM) system, currently being developed for beam setup and machine protection of the particle accelerators at the European Organisation for Nuclear Research (CERN). The paper summarizes the Fourier analyzer to the extent relevant to this work and the basic principle of the related frequency adaptation methods. It then outlines the suggested new scheme, presents practical considerations for implementing it and underpins it with an example and the corresponding operational experience.
Overlay caching scheme for overlay networks
NASA Astrophysics Data System (ADS)
Tran, Minh; Tavanapong, Wallapak
2003-01-01
Recent years have seen a tremendous growth of interest in streaming continuous media such as video over the Internet. This creates an enormous increase in the demand on various server and networking resources. To minimize service delays and to reduce the loads placed on these resources, we propose an Overlay Caching Scheme (OCS) for overlay networks. OCS utilizes virtual cache structures to coordinate distributed overlay caching nodes along the delivery path between the server and the clients. OCS establishes and adapts these structures dynamically according to clients' locations and request patterns. Compared with existing video caching techniques, OCS offers better performance in terms of average service delay, server load, and network load in most cases in our study.
Breaking and Fixing of an Identity Based Multi-Signcryption Scheme
NASA Astrophysics Data System (ADS)
Selvi, S. Sharmila Deva; Vivek, S. Sree; Rangan, C. Pandu
Signcryption is a cryptographic primitive that provides authentication and confidentiality simultaneously in a single logical step. It is often required that multiple senders signcrypt a single message to a certain receiver. Obviously, it is inefficient to signcrypt the messages separately. An efficient alternative is to go for multi-signcryption. The concept of multi-signcryption is similar to that of multi-signatures, with the added property of confidentiality. Recently, Jianhong et al. proposed an identity based multi-signcryption scheme. They claimed that their scheme is secure against adaptive chosen ciphertext attack and existentially unforgeable. In this paper, we show that their scheme is not secure against chosen plaintext attack and is existentially forgeable; we also provide a fix for the scheme and prove formally that the improved scheme is secure against both adaptive chosen ciphertext attack and existential forgery.
Adaptive Force Control For Compliant Motion Of A Robot
NASA Technical Reports Server (NTRS)
Seraji, Homayoun
1995-01-01
Two adaptive control schemes offer robust solutions to the problem of stable control of the forces of contact between a robotic manipulator and objects in its environment. They are called "adaptive admittance control" and "adaptive compliance control." Both schemes involve the use of force-and-torque sensors that indicate contact forces. These schemes performed well when tested in computational simulations in which they were used to control a seven-degree-of-freedom robot arm in executing contact tasks. The choice between admittance or compliance control is dictated by the requirements of the application at hand.
Convergence acceleration of implicit schemes in the presence of high aspect ratio grid cells
NASA Technical Reports Server (NTRS)
Buelow, B. E. O.; Venkateswaran, S.; Merkle, C. L.
1993-01-01
The performance of Navier-Stokes codes is influenced by several phenomena. For example, the robustness of the code may be compromised by the lack of grid resolution, by a need for more precise initial conditions or because all or part of the flowfield lies outside the flow regime in which the algorithm converges efficiently. A primary example of the latter effect is the presence of extended low Mach number and/or low Reynolds number regions which cause convergence deterioration of time marching algorithms. Recent research into this problem by several workers including the present authors has largely negated this difficulty through the introduction of time-derivative preconditioning. In the present paper, we employ the preconditioned algorithm to address convergence difficulties arising from sensitivity to grid stretching and high aspect ratio grid cells. Strong grid stretching is particularly characteristic of turbulent flow calculations where the grid must be refined very tightly in the dimension normal to the wall, without a similar refinement in the tangential direction. High aspect ratio grid cells also arise in problems that involve high aspect ratio domains such as combustor coolant channels. In both situations, the high aspect ratio cells can lead to extreme deterioration in convergence. It is the purpose of the present paper to address the reasons for this adverse response to grid stretching and to suggest methods for enhancing convergence under such circumstances. Numerical algorithms typically possess a maximum allowable or optimum value for the time step size, expressed in non-dimensional terms as a CFL number or von Neumann number (VNN). In the presence of high aspect ratio cells, the smallest dimension of the grid cell controls the time step size, causing it to be extremely small, which in turn results in the deterioration of convergence behavior. For explicit schemes, this time step limitation cannot be exceeded without violating stability restrictions.
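The time-step limitation described, in which the smallest cell dimension controls dt, is visible in a standard 2-D convective CFL estimate; the formula below is a common illustrative form, not the authors' preconditioned time step:

```python
def local_dt(dx, dy, u, v, a, cfl=0.9):
    """CFL-limited explicit time step for a 2-D cell with convective
    speeds (u, v) and sound speed a. For a high aspect ratio cell
    (dy << dx) the 1/dy terms dominate and dt collapses."""
    return cfl / (abs(u) / dx + abs(v) / dy + a * (1.0 / dx + 1.0 / dy))
```

With dy three orders of magnitude smaller than dx, the admissible step shrinks by roughly the same factor, which is the convergence-killing mechanism the paper addresses.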
NASA Astrophysics Data System (ADS)
Pan, Liang; Xu, Kun
2016-08-01
In this paper, for the first time a third-order compact gas-kinetic scheme is proposed on unstructured meshes for the compressible viscous flow computations. The possibility to design such a third-order compact scheme is due to the high-order gas evolution model, where a time-dependent gas distribution function at cell interface not only provides the fluxes across a cell interface, but also presents a time accurate solution for flow variables at cell interface. As a result, both cell averaged and cell interface flow variables can be used for the initial data reconstruction at the beginning of next time step. A weighted least-square procedure has been used for the initial reconstruction. Therefore, a compact third-order gas-kinetic scheme with the involvement of neighboring cells only can be developed on unstructured meshes. In comparison with other conventional high-order schemes, the current method avoids the Gaussian point integration for numerical fluxes along a cell interface and the multi-stage Runge-Kutta method for temporal accuracy. The third-order compact scheme is numerically stable under CFL condition CFL ≈ 0.5. Due to its multidimensional gas-kinetic formulation and the coupling of inviscid and viscous terms, even with unstructured meshes, the boundary layer solution and vortex structure can be accurately captured by the current scheme. At the same time, the compact scheme can capture strong shocks as well.
Adaptive hybrid control of manipulators
NASA Technical Reports Server (NTRS)
Seraji, H.
1987-01-01
Simple methods for the design of adaptive force and position controllers for robot manipulators within the hybrid control architecture are presented. The force controller is composed of an adaptive PID feedback controller, an auxiliary signal and a force feedforward term, and it achieves tracking of desired force setpoints in the constraint directions. The position controller consists of adaptive feedback and feedforward controllers and an auxiliary signal, and it accomplishes tracking of desired position trajectories in the free directions. The controllers are capable of compensating for dynamic cross-couplings that exist between the position and force control loops in the hybrid control architecture. The adaptive controllers do not require knowledge of the complex dynamic model or parameter values of the manipulator or the environment. The proposed control schemes are computationally fast and suitable for implementation in on-line control with high sampling rates.
Uniformly high order accurate essentially non-oscillatory schemes 3
NASA Technical Reports Server (NTRS)
Harten, A.; Engquist, B.; Osher, S.; Chakravarthy, S. R.
1986-01-01
In this paper (a third in a series) the construction and the analysis of essentially non-oscillatory shock capturing methods for the approximation of hyperbolic conservation laws are presented. Also presented is a hierarchy of high order accurate schemes which generalizes Godunov's scheme and its second order accurate MUSCL extension to arbitrary order of accuracy. The design involves an essentially non-oscillatory piecewise polynomial reconstruction of the solution from its cell averages, time evolution through an approximate solution of the resulting initial value problem, and averaging of this approximate solution over each cell. The reconstruction algorithm is derived from a new interpolation technique that when applied to piecewise smooth data gives high-order accuracy whenever the function is smooth but avoids a Gibbs phenomenon at discontinuities. Unlike standard finite difference methods this procedure uses an adaptive stencil of grid points and consequently the resulting schemes are highly nonlinear.
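The adaptive-stencil idea can be sketched at second order: each cell takes its slope from the side with the smaller divided difference, which keeps the reconstruction essentially non-oscillatory at discontinuities. This is a minimal ENO-2 sketch on periodic cell averages (my simplification, not the high-order reconstruction of the paper):

```python
import numpy as np

def eno2_interface_values(ubar):
    """Second-order ENO reconstruction of the value at the right face
    of each cell from periodic cell averages: the stencil extends
    toward the side with the smaller divided difference."""
    dL = ubar - np.roll(ubar, 1)      # backward difference
    dR = np.roll(ubar, -1) - ubar     # forward difference
    slope = np.where(np.abs(dL) <= np.abs(dR), dL, dR)
    return ubar + 0.5 * slope
```

On smooth data the scheme reduces to a standard second-order linear reconstruction, while near a jump the chosen divided difference is the small one, so no new extrema are created, which is the essence of the adaptive-stencil nonlinearity noted above.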
Modulation recognition for variable-rate QAM schemes
NASA Astrophysics Data System (ADS)
Lin, Yu-Chuan; Kuo, C.-C. Jay
1995-12-01
For some applications such as mobile communication or transmission of multimedia data, it is desirable to use modems with variable constellation schemes to adapt to the fast changing channel to accommodate a wide range of data transmission rates, bit error rates and data types. In this research we are interested in designing receivers which can identify the QAM constellation schemes from the received signal with an unknown reference phase. The problem is modeled as an M-ary hypothesis test with each hypothesis corresponding to one of the M possible constellation schemes. We show the performance of BPSK, QPSK, 8-PSK, 16-PSK, V.29-7200 bps, V.29-9600 bps, 16-QAM, 32-QAM, 64-QAM, 128-QAM, and 256-QAM classifiers in numerical experiments.
Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique
2011-05-01
In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
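For contrast with voltage stepping, the exact event-driven computation mentioned above is available for the leaky integrate-and-fire model, whose next spike time under constant input has a closed form (a sketch with illustrative variable names, not code from the paper):

```python
import math

def lif_next_spike(v0, v_inf, v_th, tau):
    """Exact next spike time of a leaky integrate-and-fire neuron
    dV/dt = (v_inf - V)/tau under constant input, where v_inf is the
    steady-state voltage the trajectory relaxes toward.

    V(t) = v_inf + (v0 - v_inf)*exp(-t/tau); solving V(t) = v_th gives
    the spike time.  Returns None when the threshold is never reached."""
    if v_inf <= v_th:
        return None                      # trajectory saturates below threshold
    return tau * math.log((v_inf - v0) / (v_inf - v_th))
```

For nonlinear models such as the quadratic integrate-and-fire neuron no such closed form exists, which is the motivation for approximate strategies like voltage stepping.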
NASA Astrophysics Data System (ADS)
Kinzig, Ann P.
2015-03-01
This paper is intended as a brief introduction to climate adaptation in a conference devoted otherwise to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lend some insight into factors that may or may not be important to successful adaptation.
High resolution schemes for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Harten, A.
1983-01-01
A class of new explicit second order accurate finite difference schemes for the computation of weak solutions of hyperbolic conservation laws is presented. These highly nonlinear schemes are obtained by applying a nonoscillatory first order accurate scheme to an appropriately modified flux function. The so-derived second order accurate schemes achieve high resolution while preserving the robustness of the original nonoscillatory first order accurate scheme. Numerical experiments are presented to demonstrate the performance of these new schemes.
The fundamentals of adaptive grid movement
NASA Technical Reports Server (NTRS)
Eiseman, Peter R.
1990-01-01
Basic grid point movement schemes are studied. The schemes are referred to as adaptive grids. Weight functions and equidistribution in one dimension are treated. The specification of coefficients in the linear weight, attraction to a given grid or a curve, and evolutionary forces are considered. Curve-by-curve and finite volume methods are described. The temporal coupling of partial differential equation solvers and grid generators is discussed.
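The equidistribution principle in one dimension can be sketched as follows: grid points are placed so that each cell carries an equal share of the integral of a weight function, clustering points where the weight is large. A minimal illustration (not Eiseman's formulation):

```python
import numpy as np

def equidistribute(x, w, n):
    """Place n+1 grid points so that each of the n cells carries an equal
    share of the weight integral, computed by the trapezoidal rule on the
    fine reference grid x.  Large weight -> small cells, so grid points
    cluster where w is large."""
    # cumulative weight integral on the reference grid
    cw = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    # invert it at n+1 equally spaced target values
    targets = np.linspace(0.0, cw[-1], n + 1)
    return np.interp(targets, cw, x)
```

A constant weight reproduces the uniform grid; a weight that grows toward one boundary pulls the grid points toward it.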
NASA Astrophysics Data System (ADS)
Peters, Andre; Nehls, Thomas; Wessolek, Gerd
2016-06-01
Weighing lysimeters with appropriate data filtering yield the most precise and unbiased information for precipitation (P) and evapotranspiration (ET). A recently introduced filter scheme for such data is the AWAT (Adaptive Window and Adaptive Threshold) filter (Peters et al., 2014). The filter applies an adaptive threshold to separate significant from insignificant mass changes, guaranteeing that P and ET are not overestimated, and uses a step interpolation between the significant mass changes. In this contribution we show that the step interpolation scheme, which reflects the resolution of the measuring system, can lead to unrealistic prediction of P and ET, especially if they are required in high temporal resolution. We introduce linear and spline interpolation schemes to overcome these problems. To guarantee that medium to strong precipitation events abruptly following low or zero fluxes are not smoothed in an unfavourable way, a simple heuristic selection criterion is used, which attributes such precipitations to the step interpolation. The three interpolation schemes (step, linear and spline) are tested and compared using a data set from a grass-reference lysimeter with 1 min resolution, ranging from 1 January to 5 August 2014. The selected output resolutions for P and ET prediction are 1 day, 1 h and 10 min. As expected, the step scheme yielded reasonable flux rates only for a resolution of 1 day, whereas the other two schemes are well able to yield reasonable results for any resolution. The spline scheme returned slightly better results than the linear scheme concerning the differences between filtered values and raw data. Moreover, this scheme allows continuous differentiability of filtered data so that any output resolution for the fluxes is sound. Since computational burden is not problematic for any of the interpolation schemes, we suggest always using the spline scheme.
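The contrast between the step and linear interpolation of the significant mass changes can be sketched as follows (an illustrative implementation, not the AWAT code; the spline scheme would replace `np.interp` with a cubic spline):

```python
import numpy as np

def step_interp(t, t_sig, m_sig):
    """Step scheme: hold the mass value at the last significant change.
    This mirrors the resolution of the measuring system but yields
    unrealistic flux rates at high output resolution."""
    idx = np.searchsorted(t_sig, t, side="right") - 1
    return m_sig[np.clip(idx, 0, len(m_sig) - 1)]

def linear_interp(t, t_sig, m_sig):
    """Linear scheme: interpolate between significant mass changes,
    giving smoother flux rates between the retained points."""
    return np.interp(t, t_sig, m_sig)
```

In the paper's heuristic, abrupt medium-to-strong precipitation events would still be attributed to the step scheme so that they are not smeared out by the interpolation.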
Optimal probabilistic dense coding schemes
NASA Astrophysics Data System (ADS)
Kögler, Roger A.; Neves, Leonardo
2017-04-01
Dense coding with non-maximally entangled states has been investigated in many different scenarios. We revisit this problem for protocols adopting the standard encoding scheme. In this case, the set of possible classical messages cannot be perfectly distinguished due to the non-orthogonality of the quantum states carrying them. So far, the decoding process has been approached in two ways: (i) the message is always inferred, but with an associated (minimum) error; (ii) the message is inferred without error, but only sometimes; in case of failure, nothing else is done. Here, we generalize these approaches and propose novel optimal probabilistic decoding schemes. The first uses quantum-state separation to increase the distinguishability of the messages with an optimal success probability. This scheme is shown to include (i) and (ii) as special cases and continuously interpolate between them, which enables the decoder to trade off between the level of confidence desired to identify the received messages and the success probability for doing so. The second scheme, called multistage decoding, applies only to qudits (d-level quantum systems with d>2) and consists of further attempts in the state identification process in case of failure in the first one. We show that this scheme is advantageous over (ii) as it increases the mutual information between the sender and receiver.
Adaptive clinical trial designs in oncology
Zang, Yong; Lee, J. Jack
2015-01-01
Adaptive designs have become popular in clinical trials and drug development. Unlike traditional trial designs, adaptive designs use accumulating data to modify the ongoing trial without undermining the integrity and validity of the trial. As a result, adaptive designs provide a flexible and effective way to conduct clinical trials. The designs have potential advantages of improving the study power, reducing sample size and total cost, treating more patients with more effective treatments, identifying efficacious drugs for specific subgroups of patients based on their biomarker profiles, and shortening the time for drug development. In this article, we review adaptive designs commonly used in clinical trials and investigate several aspects of the designs, including the dose-finding scheme, interim analysis, adaptive randomization, biomarker-guided randomization, and seamless designs. For illustration, we provide examples of real trials conducted with adaptive designs. We also discuss practical issues from the perspective of using adaptive designs in oncology trials. PMID:25811018
Nakano, Hidehiro; Utani, Akihide; Miyauchi, Arata; Yamamoto, Hisao
2011-04-19
This paper studies a chaos-based data gathering scheme in multiple-sink wireless sensor networks. In the proposed scheme, each wireless sensor node has a simple chaotic oscillator. The oscillators generate spike signals with chaotic interspike intervals and are impulsively coupled by the signals via wireless communication. Each wireless sensor node transmits and receives sensor information only at the timing of the couplings. The proposed scheme can exhibit various chaos synchronous phenomena and their breakdown phenomena, and can effectively gather sensor information with a significantly smaller number of transmissions and receptions than the conventional scheme. Also, the proposed scheme can flexibly adapt to various wireless sensor networks, not only with a single sink node but also with multiple sink nodes. This paper reviews our previous works. Through simulation experiments, we show the effectiveness of the proposed scheme and discuss its development potential.
An Underfrequency Load Shedding Scheme with Minimal Knowledge of System Parameters
NASA Astrophysics Data System (ADS)
Joe, Athbel; Krishna, S.
2015-02-01
Underfrequency load shedding (UFLS) is a common practice to protect a power system during a large generation deficit. The adaptive UFLS schemes proposed in the literature have drawbacks such as requiring transmission of local frequency measurements to a central location and knowledge of system parameters, such as the inertia constant H and load damping constant D. In this paper, a UFLS scheme that uses only local frequency measurements is proposed. The proposed method does not require prior knowledge of H and D. The scheme is developed for power systems with and without spinning reserve. The proposed scheme requires frequency measurements free from the oscillations at the swing mode frequencies; the use of an elliptic low pass filter to remove these oscillations is proposed. The scheme is tested on a 2-generator system and the 10-generator New England system. Performance of the scheme with a power system stabilizer is also studied.
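The filtering step can be sketched with SciPy's elliptic filter design. The filter order, ripple values, 0.5 Hz cutoff, and 10 Hz measurement rate below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import ellip, filtfilt

def smooth_frequency(f_meas, fs, cutoff=0.5):
    """Zero-phase elliptic low-pass filtering of local frequency
    measurements, intended to strip swing-mode oscillations before the
    UFLS logic sees them.  An odd filter order is used so the DC gain is
    exactly one and the slow frequency-decline trend is not biased."""
    b, a = ellip(5, 0.1, 40.0, cutoff, btype="low", fs=fs)  # 0.1 dB ripple, 40 dB stop
    return filtfilt(b, a, f_meas)                           # forward-backward: zero phase
```

Zero-phase (forward-backward) filtering matters here because a phase lag in the measured frequency would delay the shedding decision.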
Fine granularity adaptive multireceiver video streaming
NASA Astrophysics Data System (ADS)
Eide, Viktor S. Wold; Eliassen, Frank; Michaelsen, Jørgen Andreas; Jensen, Frank
2007-01-01
Efficient delivery of video data over computer networks has been studied extensively for decades. Still, multi-receiver video delivery is challenging due to heterogeneity and variability in network availability, end node capabilities, and receiver preferences. Our earlier work has shown that content-based networking is a viable technology for fine granularity multireceiver video streaming. By exploiting this technology, we have demonstrated that each video receiver is provided with fine grained and independent selectivity along the different video quality dimensions: region of interest, signal-to-noise ratio for the luminance and chrominance planes, and temporal resolution. Here we propose a novel adaptation scheme combining such video streaming with state-of-the-art techniques from the field of adaptation to provide receiver-driven multi-dimensional adaptive video streaming. The scheme allows each client to individually adapt the quality of the received video according to its currently available resources and own preferences. The proposed adaptation scheme is validated experimentally. The results demonstrate adaptation to variations in available bandwidth and CPU resources roughly over two orders of magnitude, and that fine grained adaptation is feasible given radically different user preferences.
One-qubit fingerprinting schemes
Beaudrap, J. Niel de
2004-02-01
Fingerprinting is a technique in communication complexity in which two parties (Alice and Bob) with large data sets send short messages to a third party (a referee), who attempts to compute some function of the larger data sets. For the equality function, the referee attempts to determine whether Alice's data and Bob's data are the same. In this paper, we consider the extreme scenario of performing fingerprinting where Alice and Bob each send either a one-bit (classical) or a one-qubit (quantum) message to the referee for the equality problem. Restrictive bounds are demonstrated for the error probability of one-bit fingerprinting schemes, and it is shown that it is easy to construct one-qubit fingerprinting schemes which can outperform any one-bit fingerprinting scheme. The author hopes that this analysis will provide results useful for performing physical experiments, which may help to advance implementations for more general quantum communication protocols.
PATTERN RECOGNITION AND CLASSIFICATION USING ADAPTIVE LINEAR NEURON DEVICES
The report discusses: (1) adaption by an adaptive linear neuron (Adaline), as applied to the pattern recognition and classification problem; (2) four possible iterative adaption schemes which may be used to train an Adaline; (3) the use of multiple Adalines (Madaline) and two logic layers to increase system capability; and (4) the use of Adaline in the practical fields of speech recognition, weather forecasting, and adaptive control systems, and the possible use of Madaline in the character recognition field.
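The best known of the iterative adaption schemes for the Adaline is the LMS (Widrow-Hoff) rule, which can be sketched as follows (a minimal illustration, not the report's implementation):

```python
import numpy as np

def train_adaline(X, y, lr=0.01, epochs=50):
    """Train an Adaline with the LMS (Widrow-Hoff) rule: the weight
    update uses the error of the *linear* output, before thresholding,
    which distinguishes the Adaline from the perceptron rule."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend a bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            error = target - xi @ w                 # linear activation error
            w += lr * error * xi                    # Widrow-Hoff update
    return w

def predict(X, w):
    """Threshold the linear output to obtain the +/-1 class decision."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return np.where(Xb @ w >= 0.0, 1, -1)
```

Because the update minimizes the mean squared error of the linear output, the weights converge toward the least-squares solution even when the classes are not perfectly separable.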
NASA Astrophysics Data System (ADS)
Yu, Rixin; Yu, Jiangfei; Bai, Xue-Song
2012-06-01
We present an improved numerical scheme for numerical simulations of low Mach number turbulent reacting flows with detailed chemistry and transport. The method is based on a semi-implicit operator-splitting scheme with a stiff solver for integration of the chemical kinetic rates, developed by Knio et al. [O.M. Knio, H.N. Najm, P.S. Wyckoff, A semi-implicit numerical scheme for reacting flow II. Stiff, operator-split formulation, Journal of Computational Physics 154 (2) (1999) 428-467]. Using the material derivative form of the continuity equation, we enhance the scheme to allow for large density ratios in the flow field. The scheme is developed for direct numerical simulation of turbulent reacting flow by employing high-order discretization for the spatial terms. The accuracy of the scheme in space and time is verified by examining the grid/time-step dependency on one-dimensional benchmark cases: a freely propagating premixed flame in an open environment and in an enclosure related to spark-ignition engines. The scheme is then examined in simulations of a two-dimensional laminar flame/vortex-pair interaction. Furthermore, we apply the scheme to direct numerical simulation of a homogeneous charge compression ignition (HCCI) process in an enclosure studied previously in the literature. Satisfactory agreement is found in terms of the overall ignition behavior, local reaction zone structures and statistical quantities. Finally, the scheme is used to study the development of intrinsic flame instabilities in a lean H2/air premixed flame, where it is shown that the spatial and temporal accuracy of numerical schemes can have a great impact on the prediction of the sensitive nonlinear evolution process of flame instability.
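The operator-splitting idea underlying such schemes can be illustrated on a model reaction-diffusion equation, advancing the stiff reaction separately from the transport terms. This is a generic Strang-splitting sketch, not the Knio et al. scheme; the logistic reaction is chosen because its substep can be integrated exactly:

```python
import numpy as np

def logistic_exact(u, r, dt):
    """Exact flow of the (possibly stiff) reaction step du/dt = r*u*(1-u)."""
    e = np.exp(r * dt)
    return u * e / (1.0 + u * (e - 1.0))

def strang_step(u, dx, dt, D, r):
    """One Strang-split step for u_t = D*u_xx + r*u*(1-u):
    half reaction (exact), full explicit diffusion step, half reaction.
    The symmetric arrangement makes the splitting error O(dt^2),
    matching a second-order base discretization."""
    u = logistic_exact(u, r, 0.5 * dt)
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2   # periodic Laplacian
    u = u + dt * D * lap
    return logistic_exact(u, r, 0.5 * dt)
```

In a production reacting-flow code the exact reaction substep is replaced by a stiff ODE solver per grid point, which is precisely where operator splitting pays off.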
An Advanced Leakage Scheme for Neutrino Treatment in Astrophysical Simulations
NASA Astrophysics Data System (ADS)
Perego, A.; Cabezón, R. M.; Käppeli, R.
2016-04-01
We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Neutrino trapped components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows a very good qualitative and a partial quantitative agreement for key quantities from collapse to a few hundreds of milliseconds after core bounce. We have proved the adaptability and flexibility of our ASL scheme, coupling it to an axisymmetric Eulerian and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. Therefore, the neutrino treatment presented here is ideal for large parameter-space explorations, parametric studies, high-resolution tests, code developments, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.
ERIC Educational Resources Information Center
Exceptional Parent, 1987
1987-01-01
Suggestions are presented for helping disabled individuals learn to use or adapt toothbrushes for proper dental care. A directory lists dental health instructional materials available from various organizations. (CB)
Design and simulation of advanced fault tolerant flight control schemes
NASA Astrophysics Data System (ADS)
Gururajan, Srikanth
This research effort describes the design and simulation of a distributed Neural Network (NN) based fault tolerant flight control scheme and the interface of the scheme within a simulation/visualization environment. The goal of the fault tolerant flight control scheme is to recover an aircraft from failures to its sensors or actuators. A commercially available simulation package, Aviator Visual Design Simulator (AVDS), was used for the purpose of simulation and visualization of the aircraft dynamics and the performance of the control schemes. For the purpose of the sensor failure detection, identification and accommodation (SFDIA) task, it is assumed that the pitch, roll and yaw rate gyros onboard are without physical redundancy. The task is accomplished through the use of a Main Neural Network (MNN) and a set of three De-Centralized Neural Networks (DNNs), providing analytical redundancy for the pitch, roll and yaw gyros. The purpose of the MNN is to detect a sensor failure while the purpose of the DNNs is to identify the failed sensor and then to provide failure accommodation. The actuator failure detection, identification and accommodation (AFDIA) scheme also features the MNN, for detection of actuator failures, along with three Neural Network Controllers (NNCs) for providing the compensating control surface deflections to neutralize the failure induced pitching, rolling and yawing moments. All NNs continue to train on-line, in addition to an offline trained baseline network structure, using the Extended Back-Propagation Algorithm (EBPA), with the flight data provided by the AVDS simulation package. The above-mentioned adaptive flight control schemes have been traditionally implemented sequentially on a single computer. This research addresses the implementation of these fault tolerant flight control schemes on parallel and distributed computer architectures, using Berkeley Software Distribution (BSD) sockets and Message Passing Interface (MPI) for inter-process communication.
Application of Central Upwind Scheme for Solving Special Relativistic Hydrodynamic Equations
Yousaf, Muhammad; Ghaffar, Tayabia; Qamar, Shamsul
2015-01-01
The accurate modeling of various features in high energy astrophysical scenarios requires the solution of the Einstein equations together with those of special relativistic hydrodynamics (SRHD). Such models are more complicated than the non-relativistic ones due to the nonlinear relations between the conserved and state variables. A high-resolution shock-capturing central upwind scheme is implemented to solve the given set of equations. The proposed technique uses the precise information of local propagation speeds to avoid the excessive numerical diffusion. The second order accuracy of the scheme is obtained with the use of MUSCL-type initial reconstruction and Runge-Kutta time stepping method. After a discussion of the equations solved and of the techniques employed, a series of one and two-dimensional test problems are carried out. To validate the method and assess its accuracy, the staggered central and the kinetic flux-vector splitting schemes are also applied to the same model. The scheme is robust and efficient. Its results are comparable to those obtained from the sophisticated algorithms, even in the case of highly relativistic two-dimensional test problems. PMID:26070067
Rescaling of the Roe scheme in low Mach-number flow regions
NASA Astrophysics Data System (ADS)
Boniface, Jean-Christophe
2017-01-01
A rescaled matrix-valued dissipation is reformulated for the Roe scheme in low Mach-number flow regions from a well known family of local low-speed preconditioners popularized by Turkel. The rescaling is obtained explicitly by suppressing the pre-multiplication of the preconditioner with the time derivative and by deriving the full set of eigenspaces of the Roe-Turkel matrix dissipation. This formulation preserves the time consistency and does not require to reformulate the boundary conditions based on the characteristic theory. The dissipation matrix achieves by construction the proper scaling in low-speed flow regions and returns the original Roe scheme at the sonic line. We find that all eigenvalues are nonnegative in the subsonic regime. However, it becomes necessary to formulate a stringent stability condition to the explicit scheme in the low-speed flow regions based on the spectral radius of the rescaled matrix dissipation. With the large disparity of the eigenvalues in the dissipation matrix, this formulation raises a two-timescale problem for the acoustic waves, which is circumvented for a steady-state iterative procedure by the development of a robust implicit characteristic matrix time-stepping scheme. The behaviour of the modified eigenvalues in the incompressible limit and at the sonic line also suggests applying the entropy correction carefully, especially for complex non-linear flows.
Kahan, W.; Li, Ren-Chang
1997-07-01
An unconventional numerical method for solving a restrictive yet often-encountered class of ordinary differential equations is proposed. The method has a crucial property, which we call reflexivity; it requires solving only one linear system per time step, yet is second-order accurate. A systematic and easily implementable scheme is proposed to enhance the computational efficiency of such methods whenever needed. Applications are reported on how the idea can be applied to solve the Korteweg-de Vries equation discretized in space.
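The flavor of such a method can be sketched with Kahan's discretization of a quadratic vector field, which likewise costs one linear solve per time step while remaining second-order accurate. The Lotka-Volterra system below is an illustration of the idea, not the paper's KdV application:

```python
import numpy as np

def kahan_lv_step(x, y, h):
    """One step of Kahan's 'unconventional' discretization for the
    Lotka-Volterra system x' = x - x*y, y' = -y + x*y.

    Each quadratic term x*y is replaced by the symmetric cross-average
    (x*Y + X*y)/2 and each linear term by its midpoint, so the unknown
    update (X, Y) satisfies a single 2x2 *linear* system per step, yet
    the scheme is second-order accurate and reflexive: swapping
    (x, y) <-> (X, Y) together with h -> -h reproduces the scheme."""
    A = np.array([[1.0 - h / 2 + h * y / 2, h * x / 2],
                  [-h * y / 2, 1.0 + h / 2 - h * x / 2]])
    b = np.array([x * (1.0 + h / 2), y * (1.0 - h / 2)])
    return np.linalg.solve(A, b)
```

For this system the scheme is known to nearly preserve the invariant H = x - ln x + y - ln y over long times, a hallmark of such reflexive discretizations.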
Perceptually-Based Adaptive JPEG Coding
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)
1996-01-01
An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
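The block-adaptive quantization can be sketched as follows (illustrative code assuming the 8 x 8 DCT coefficients are already computed; the function and variable names are not from the standard):

```python
import numpy as np

def quantize_blocks(dct_blocks, qmatrix, multipliers):
    """Adaptive JPEG-style quantization: a single base quantization
    matrix applies to the whole channel, scaled per 8x8 block by a
    perceptual multiplier (a larger multiplier means coarser
    quantization where contrast masking permits it)."""
    out = np.empty_like(dct_blocks, dtype=np.int32)
    for k, (block, m) in enumerate(zip(dct_blocks, multipliers)):
        out[k] = np.round(block / (qmatrix * m)).astype(np.int32)
    return out
```

The perceptual optimization described in the abstract then amounts to choosing the multipliers so that the per-block perceptual error is as flat as possible across the image.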
Extension of Low Dissipative High Order Hydrodynamics Schemes for MHD Equations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, Bjoern; Mansour, Nagi (Technical Monitor)
2002-01-01
The objective of this paper is to extend our recently developed highly parallelizable nonlinear stable high order schemes for complex multiscale hydrodynamic applications to the viscous MHD (magnetohydrodynamic) equations. These schemes employed multiresolution wavelets as adaptive numerical dissipation controls to limit the amount and to aid the selection and/or blending of the appropriate types of dissipation to be used. The new scheme is formulated for both the conservative and non-conservative form of the MHD equations in curvilinear grids. The three features of the present MHD scheme over existing schemes in the open literature are as follows. First, the scheme is constructed for long-time integrations of shock/turbulence/combustion magnetized flows. Available schemes are too diffusive for long-time integrations and/or turbulence/combustion problems. Second, unlike existing schemes for the conservative MHD equations which suffer from ill-conditioned eigen-decompositions, the present scheme makes use of a well-conditioned eigen-decomposition to solve the conservative form of the MHD equations. This is due, in part, to the fact that the divergence-free condition on the magnetic field is a different type of constraint from its incompressible Navier-Stokes cousin. Third, a new approach to minimize the numerical error of the divergence-free magnetic condition for high order schemes is introduced.
On symmetric and upwind TVD schemes
NASA Technical Reports Server (NTRS)
Yee, H. C.
1985-01-01
A class of explicit and implicit total variation diminishing (TVD) schemes for the compressible Euler and Navier-Stokes equations was developed. They do not generate spurious oscillations across shocks and contact discontinuities. In general, shocks can be captured within 1 to 2 grid points. For the inviscid case, these schemes are divided into upwind TVD schemes and symmetric (nonupwind) TVD schemes. The upwind TVD scheme is based on the second-order TVD scheme. The symmetric TVD scheme is a generalization of Roe's and Davis' TVD Lax-Wendroff scheme. The performance of these schemes on some viscous and inviscid airfoil steady-state calculations is investigated. The symmetric and upwind TVD schemes are compared.
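The mechanism that keeps such schemes free of spurious oscillations can be sketched with a minmod-limited second-order upwind step for linear advection, a generic TVD construction rather than any of Yee's specific schemes:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: pick the smaller slope when the signs agree and
    zero otherwise; the zero at extrema is what enforces the TVD property."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def tvd_advection_step(u, c):
    """One step of a second-order TVD (MUSCL-type) scheme for
    u_t + a*u_x = 0 with CFL number c = a*dt/dx in (0, 1] and periodic
    boundaries.  Limited slopes make the scheme second order in smooth
    regions while reverting to first-order upwind at discontinuities."""
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited cell slope
    uh = u + 0.5 * (1.0 - c) * s                        # interface value at i+1/2
    return u - c * (uh - np.roll(uh, 1))                # conservative update
```

A step profile advected this way stays within its initial bounds and its total variation does not grow, in line with the capture of shocks within one to two grid points reported in the abstract.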
Gas-kinetic BGK Schemes for 3D Viscous Flow
NASA Astrophysics Data System (ADS)
Jiang, Jin; Qian, Yuehong
2009-11-01
The gas-kinetic BGK scheme, developed as an Euler and Navier-Stokes solver, dates back to the early 1990s, and there is now an extensive literature on the method. Here we focus on extending this approach to 3D viscous flow. First, to validate the code, some test cases are carried out, including the 1D Sod problem and the interaction between a shock and a boundary layer. Then, to improve computational efficiency, two main convergence acceleration techniques, local time-stepping and implicit residual smoothing, are adopted and tested. The results indicate that the speed-up in convergence to the steady state is significant. Finally, a turbulence model is incorporated into the current code for higher Reynolds numbers. As a proof of accuracy, the transonic flow over the ONERA M6 wing is computed and pressure distributions at selected span-wise stations are examined. The results are in good agreement with experimental data, which implies that the extension to turbulent flow is very encouraging and promising for further development.
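Local time-stepping, the first of the acceleration techniques mentioned, advances each cell with its own largest stable step when only the steady state is of interest (a sketch; the CFL number and wave-speed estimate are illustrative choices):

```python
import numpy as np

def local_time_steps(dx, u, a, cfl=0.8):
    """Local time-stepping for steady-state convergence acceleration:
    each cell advances with its own largest stable step
        dt_i = CFL * dx_i / (|u_i| + a_i),
    where u is the local flow speed and a the local sound speed, instead
    of the global minimum over all cells.  Time accuracy is lost, but
    the transient is irrelevant when only the steady state is sought."""
    return cfl * dx / (np.abs(u) + a)
```

Cells with slow local wave speeds thus take much larger steps than the globally limited one, which is where the reported speed-up comes from.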
Adaptive Quantization Parameter Cascading in HEVC Hierarchical Coding.
Zhao, Tiesong; Wang, Zhou; Chen, Chang Wen
2016-04-20
The state-of-the-art High Efficiency Video Coding (HEVC) standard adopts a hierarchical coding structure to improve its coding efficiency. This allows for the Quantization Parameter Cascading (QPC) scheme that assigns Quantization Parameters (Qps) to different hierarchical layers in order to further improve the Rate-Distortion (RD) performance. However, only static QPC schemes have been suggested in the HEVC test model (HM), which are unable to fully explore the potentials of QPC. In this paper, we propose an adaptive QPC scheme for the HEVC hierarchical structure to code natural video sequences characterized by diversified textures, motions and encoder configurations. We formulate the adaptive QPC scheme as a non-linear programming problem and solve it in a scientifically sound way with a manageable low computational overhead. The proposed model addresses a generic Qp assignment problem of video coding. Therefore, it also applies to Group-Of-Picture (GOP)-level, frame-level and Coding Unit (CU)-level Qp assignments. Comprehensive experiments have demonstrated that the proposed QPC scheme is able to adapt quickly to different video contents and coding configurations while achieving noticeable RD performance enhancement over all static and adaptive QPC schemes under comparison, as well as the HEVC default frame-level rate control. We have also made valuable observations on the distributions of adaptive QPC sets in videos of different types of contents, which provide useful insights on how to further improve static QPC schemes.
Upwind Compact Finite Difference Schemes
NASA Astrophysics Data System (ADS)
Christie, I.
1985-07-01
It was shown by Ciment, Leventhal, and Weinberg ( J. Comput. Phys.28 (1978), 135) that the standard compact finite difference scheme may break down in convection dominated problems. An upwinding of the method, which maintains the fourth order accuracy, is suggested and favorable numerical results are found for a number of test problems.
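The upwinding idea itself can be shown with the simplest first-order case (the paper's scheme is a fourth-order compact variant; this sketch only illustrates the directional bias that stabilizes convection-dominated problems):

```python
def upwind_step(u, c, dx, dt):
    """One explicit step of first-order upwind differencing for
    u_t + c u_x = 0 with periodic boundaries.

    The difference stencil is biased toward the side the information
    comes from; an unbiased (centered) stencil is what breaks down in
    convection-dominated problems.
    """
    n = len(u)
    nu = c * dt / dx  # Courant number; need |nu| <= 1 for stability
    if c >= 0:
        return [u[i] - nu * (u[i] - u[i - 1]) for i in range(n)]
    return [u[i] - nu * (u[(i + 1) % n] - u[i]) for i in range(n)]
```

At Courant number 1 the scheme translates the profile exactly by one cell, which makes a convenient sanity check.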
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of daily precipitation fields. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately for wet days. This process generates the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences appear in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, the LWP input showed the least streamflow error in the Alapaha basin and the CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error than the directly interpolated inputs. © 2011 Springer-Verlag.
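For concreteness, the first baseline in the comparison, the inverse distance weighted average, can be sketched in a few lines (the names and the power-2 exponent are illustrative assumptions, not the study's configuration):

```python
def idw(x_obs, y_obs, values, x, y, power=2.0):
    """Inverse-distance-weighted estimate at grid point (x, y) from
    sparse gauge observations: nearer gauges get larger weights,
    falling off as distance**(-power)."""
    weights_sum, weighted_vals = 0.0, 0.0
    for xo, yo, v in zip(x_obs, y_obs, values):
        d2 = (x - xo) ** 2 + (y - yo) ** 2
        if d2 == 0.0:
            return v  # grid point coincides with a gauge
        w = d2 ** (-power / 2.0)
        weights_sum += w
        weighted_vals += w * v
    return weighted_vals / weights_sum
```

A grid point equidistant from two gauges gets their plain average, which illustrates why the method smooths out the spatial variability of daily precipitation fields.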
grim: A Flexible, Conservative Scheme for Relativistic Fluid Theories
NASA Astrophysics Data System (ADS)
Chandra, Mani; Foucart, Francois; Gammie, Charles F.
2017-03-01
Hot, diffuse, relativistic plasmas such as sub-Eddington black-hole accretion flows are expected to be collisionless, yet are commonly modeled as a fluid using ideal general relativistic magnetohydrodynamics (GRMHD). Dissipative effects such as heat conduction and viscosity can be important in a collisionless plasma and will potentially alter the dynamics and radiative properties of the flow from that in ideal fluid models; we refer to models that include these processes as Extended GRMHD. Here we describe a new conservative code, grim, that enables all of the above and additional physics to be efficiently incorporated. grim combines time evolution and primitive variable inversion needed for conservative schemes into a single step using an algorithm that only requires the residuals of the governing equations as inputs. This algorithm makes the code physics-agnostic as well as flexible with regard to time-stepping schemes. grim runs on CPUs, as well as on GPUs, using the same code. We formulate a performance model and use it to show that our implementation runs optimally on both architectures. grim correctly captures classical GRMHD test problems as well as a new suite of linear and nonlinear test problems with anisotropic conduction and viscosity in special and general relativity. As tests and example applications, we resolve the shock substructure due to the presence of dissipation, and report on relativistic versions of the magneto-thermal instability and heat flux driven buoyancy instability, which arise due to anisotropic heat conduction, and of the firehose instability, which occurs due to anisotropic pressure (i.e., viscosity). Finally, we show an example integration of an accretion flow around a Kerr black hole, using Extended GRMHD.
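The residual-only algorithmic idea, folding the time step and primitive-variable inversion into one root-solve that needs nothing beyond residual evaluations, can be illustrated with a scalar Newton iteration (a sketch under assumed names; grim itself solves coupled multidimensional systems):

```python
def newton_solve(residual, u0, tol=1e-10, max_iter=50, eps=1e-7):
    """Drive R(u) -> 0 using only residual evaluations: the Jacobian is
    approximated by finite differences, so the solver never needs to
    know the physics behind R. This is what makes a residual-based
    implicit step 'physics agnostic'."""
    u = u0
    for _ in range(max_iter):
        r = residual(u)
        if abs(r) < tol:
            return u
        drdu = (residual(u + eps) - r) / eps  # finite-difference Jacobian
        u -= r / drdu
    return u

# Example: backward-Euler residual for du/dt = -u with dt = 0.1.
u_old, dt = 1.0, 0.1
u_new = newton_solve(lambda u: u - u_old + dt * u, u_old)
```

Swapping in a different time-stepping scheme only changes the residual function passed in, not the solver.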
NASA Astrophysics Data System (ADS)
Mazarakis, N.; Kotroni, V.; Lagouvardos, K.; Argiriou, A.
2009-09-01
The sensitivity of quantitative precipitation forecasts to various modifications of the Kain-Fritsch (KF) convective parameterization scheme (CPS) is examined for twenty selected cases characterized by intense convective activity and widespread precipitation over Greece during the warm period of the years 2005-2007. The study is conducted using the MM5 model. The modifications to the KF CPS, each designed to test model sensitivity to the convective scheme formulation, are discussed. The modifications include: (a) the maximization of the convective scheme precipitation efficiency, (b) the change of the convective time step, (c) forcing the convective scheme to produce more/less cloud material, (d) the alteration of the vertical profile of updraft mass flux detrainment. One hundred forty numerical simulations have been carried out on two nested domains, with horizontal grid increments of 24 and 8 km respectively. The simulated precipitation from the 8-km grid is verified against raingauge measurements. Model results using the aforementioned modifications of the convective scheme do not show significant improvements in 6-h precipitation totals compared to simulations generated using the unmodified convective scheme. In general, skill scores among the cases and the precipitation thresholds vary widely.
NASA Astrophysics Data System (ADS)
Sørensen, B.; Kaas, E.; Korsholm, U. S.
2012-11-01
In this paper a new advection scheme for the online coupled chemical-weather prediction model Enviro-HIRLAM is presented. The new scheme is based on the locally mass-conserving semi-Lagrangian method (LMCSL), where the original two-dimensional scheme has been extended to a fully three-dimensional version. This means that the three-dimensional semi-implicit semi-Lagrangian scheme which is currently used in Enviro-HIRLAM is largely unchanged. The HIRLAM model is a computationally efficient hydrostatic operational short-term numerical weather prediction model, which is used as the base for the online integrated Enviro-HIRLAM. The new scheme is shown to be efficient, mass-conserving, and shape-preserving, while only requiring minor alterations to the original code. It still retains the stability at long time steps, which the semi-Lagrangian schemes are known for, while handling the emissions of chemical species accurately. Several mass-conserving filters have been tested to assess the optimal balance of accuracy vs. efficiency.
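The semi-Lagrangian backbone, tracing each grid point back along its characteristic and interpolating at the departure point, is what permits the long time steps mentioned above. A 1D sketch (illustrative; LMCSL adds a local mass-conservation correction not shown here):

```python
def semi_lagrangian_step(u, c, dx, dt):
    """One semi-Lagrangian step for u_t + c u_x = 0 on a periodic grid:
    trace each grid point back along the characteristic and interpolate
    linearly at the departure point. Unlike explicit Eulerian schemes,
    this remains stable for Courant numbers greater than 1."""
    n = len(u)
    out = []
    for i in range(n):
        x_dep = (i - c * dt / dx) % n      # departure point, grid units
        j = int(x_dep) % n
        frac = x_dep - int(x_dep)
        out.append((1 - frac) * u[j] + frac * u[(j + 1) % n])
    return out
```

At an integer Courant number the profile is shifted exactly, even when that number exceeds the Eulerian stability limit.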
A New Improving Quantum Secret Sharing Scheme
NASA Astrophysics Data System (ADS)
Xu, Ting-Ting; Li, Zhi-Hui; Bai, Chen-Ming; Ma, Min
2017-01-01
An improving quantum secret sharing (IQSS) scheme was introduced by Nascimento et al. (Phys. Rev. A 64, 042311 (2001)) and analyzed in terms of the improved quantum access structure. In this paper, we propose a new improving quantum secret sharing scheme by which more quantum access structures can be realized than by the previous one. For example, we prove that any threshold and hypercycle quantum access structures can be realized by the new scheme.
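For readers unfamiliar with threshold access structures, the classical analogue is Shamir's (k, n) scheme: any k shares recover the secret, fewer reveal nothing. The quantum schemes discussed here realize analogous access structures on quantum states; this classical sketch only illustrates the access structure itself, not the paper's protocol:

```python
import random

P = 2**61 - 1  # prime field for the shares (an illustrative choice)

def make_shares(secret, k, n):
    """Shamir (k, n)-threshold sharing: hide the secret as the constant
    term of a random degree-(k-1) polynomial over GF(P) and hand out
    its values at n distinct points."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P) from >= k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any subset of at least k of the n shares reconstructs the secret, which is exactly the threshold access structure.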
Lossless Video Sequence Compression Using Adaptive Prediction
NASA Technical Reports Server (NTRS)
Li, Ying; Sayood, Khalid
2007-01-01
We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.
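The core property of backward-adaptive prediction, that the decoder can mirror the encoder without side information, is easy to see with plain temporal prediction (a deliberately minimal sketch; the actual coder switches among temporal, spatial, and spectral predictors adaptively):

```python
def predict_residuals(frame_prev, frame_cur):
    """Temporal prediction for lossless coding: represent each pixel as
    its residual against the co-located pixel of the previous frame.
    The decoder holds the same previous frame, so it can invert this
    exactly with no side information ('backward adaptive')."""
    return [c - p for c, p in zip(frame_cur, frame_prev)]

def reconstruct(frame_prev, residuals):
    """Decoder side: add residuals back onto the previous frame."""
    return [p + r for p, r in zip(frame_prev, residuals)]
```

Residuals of slowly changing video cluster near zero, which is what makes them cheaper to entropy-code than the raw pixels.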
Sensitivity of Age-of-Air Calculations to the Choice of Advection Scheme
NASA Technical Reports Server (NTRS)
Eluszkiewicz, Janusz; Hemler, Richard S.; Mahlman, Jerry D.; Bruhwiler, Lori; Takacs, Lawrence L.
2000-01-01
-Lagrangian dynamics employed in the MACCM3. This type of dynamical core (employed with a 60-min time step) is likely to reduce SLT's interpolation errors that are compounded by the short-term variability characteristic of the explicit centered-difference dynamics employed in the SKYHI model (time step of 3 min). In the extreme case of a very slowly varying circulation, the choice of advection scheme has no effect on two-dimensional (latitude-height) age-of-air calculations, owing to the smooth nature of the transport circulation in 2D models. These results suggest that nondiffusive schemes may be the preferred choice for multiyear simulations of tracers not overly sensitive to the requirement of monotonicity (this category includes many greenhouse gases). At the same time, age-of-air calculations offer a simple quantitative diagnostic of a scheme's long-term diffusive properties and may help in the evaluation of dynamical cores in multiyear integrations. On the other hand, the sensitivity of the computed ages to the model numerics calls for caution in using age of air as a diagnostic of a GCM's large-scale circulation field.
NASA Astrophysics Data System (ADS)
Rosen, A. L.; Krumholz, M. R.; Oishi, J. S.; Lee, A. T.; Klein, R. I.
2017-02-01
We present a highly-parallel multi-frequency hybrid radiation hydrodynamics algorithm that combines a spatially-adaptive long characteristics method for the radiation field from point sources with a moment method that handles the diffuse radiation field produced by a volume-filling fluid. Our Hybrid Adaptive Ray-Moment Method (HARM2) operates on patch-based adaptive grids, is compatible with asynchronous time stepping, and works with any moment method. In comparison to previous long characteristics methods, we have greatly improved the parallel performance of the adaptive long-characteristics method by developing a new completely asynchronous and non-blocking communication algorithm. As a result of this improvement, our implementation achieves near-perfect scaling up to O(10^3) processors on distributed memory machines. We present a series of tests to demonstrate the accuracy and performance of the method.
Adaptive multiresolution semi-Lagrangian discontinuous Galerkin methods for the Vlasov equations
NASA Astrophysics Data System (ADS)
Besse, N.; Deriaz, E.; Madaule, É.
2017-03-01
We develop adaptive numerical schemes for the Vlasov equation by combining discontinuous Galerkin discretisation, multiresolution analysis and semi-Lagrangian time integration. We implement a tree-based structure in order to achieve adaptivity. Both multi-wavelets and discontinuous Galerkin rely on a local polynomial basis. The schemes are tested and validated using Vlasov-Poisson equations for plasma physics and astrophysics.
Resource Allocation Scheme in MIMO-OFDMA System for User's Different Data Throughput Requirements
NASA Astrophysics Data System (ADS)
Sann Maw, Maung; Sasase, Iwao
Existing subcarrier and power allocation schemes for Multi-Input Multi-Output and Orthogonal Frequency Division Multiple Access (MIMO-OFDMA) systems have considered only equal fairness among users, with no scheme addressing proportional data rate fairness. In this paper, a subcarrier, bit and power allocation scheme is proposed to maximize the total throughput under the constraints of total power and proportional data rate fairness among users. In the proposed scheme, joint subchannel allocation and adaptive bit loading are first performed by using singular value decomposition (SVD) of the channel matrix under the constraint of users' data throughput requirements, and then adaptive power loading is applied. Simulation results show that the proposed scheme improves overall system performance while distributing throughput among users in proportion to their requirements in MIMO-OFDMA systems.
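The unconstrained baseline for allocating power across the parallel eigenchannels that an SVD exposes is classical water-filling; the proposed scheme layers proportional-rate constraints on top of this. A sketch (names and tolerances are assumptions):

```python
def water_filling(gains, total_power, tol=1e-12):
    """Classical water-filling across parallel subchannels: allocate
    power_i = max(0, mu - 1/g_i), with the water level mu chosen so the
    active powers sum to the budget. Channels too weak to reach the
    water level get zero power."""
    active = sorted(gains, reverse=True)
    while active:
        mu = (total_power + sum(1.0 / g for g in active)) / len(active)
        if mu - 1.0 / active[-1] >= -tol:
            break
        active.pop()  # weakest remaining channel gets zero power
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

Equal-gain channels split the budget evenly; a much weaker channel is shut off entirely, which is the behavior a fairness constraint then has to temper.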
Adaptive Sampling in Hierarchical Simulation
Knap, J; Barton, N R; Hornung, R D; Arsenlis, A; Becker, R; Jefferson, D R
2007-07-09
We propose an adaptive sampling methodology for hierarchical multi-scale simulation. The method utilizes a moving kriging interpolation to significantly reduce the number of evaluations of finer-scale response functions to provide essential constitutive information to a coarser-scale simulation model. The underlying interpolation scheme is unstructured and adaptive to handle the transient nature of a simulation. To handle the dynamic construction and searching of a potentially large set of finer-scale response data, we employ a dynamic metric tree database. We study the performance of our adaptive sampling methodology for a two-level multi-scale model involving a coarse-scale finite element simulation and a finer-scale crystal plasticity based constitutive law.
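The control flow of adaptive sampling, answering a coarse-scale query from cached fine-scale results whenever possible, can be sketched with a toy 1D cache (the paper uses moving kriging interpolation and a dynamic metric-tree database; this sketch substitutes nearest-neighbor reuse on a plain list):

```python
def adaptive_sample(x, cache, fine_model, tol=0.1):
    """Answer a coarse-scale query: reuse a nearby cached fine-scale
    evaluation when one is close enough, otherwise run the expensive
    fine-scale model and grow the cache. The cache adapts to wherever
    the coarse-scale simulation actually queries."""
    if cache:
        x0, y0 = min(cache, key=lambda p: abs(p[0] - x))
        if abs(x0 - x) <= tol:
            return y0          # reuse: no fine-scale call needed
    y = fine_model(x)          # expensive fine-scale evaluation
    cache.append((x, y))
    return y

calls = []
def fine_sq(x):
    calls.append(x)
    return x * x

cache = []
y1 = adaptive_sample(1.0, cache, fine_sq)
y2 = adaptive_sample(1.05, cache, fine_sq)  # within tol: cached value
y3 = adaptive_sample(2.0, cache, fine_sq)   # too far: new evaluation
```

The savings come from the second query costing nothing; a kriging interpolant would additionally blend several neighbors instead of copying the nearest one.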
Energy preservation and entropy in Lagrangian space- and time-staggered hydrodynamic schemes
NASA Astrophysics Data System (ADS)
Llor, Antoine; Claisse, Alexandra; Fochesato, Christophe
2016-03-01
Usual space- and time-staggered (STS) "leap-frog" Lagrangian hydrodynamic schemes, such as von Neumann-Richtmyer's (1950), Wilkins' (1964), and their variants, are widely used for their simplicity and robustness despite their known lack of exact energy conservation. Since the seminal work of Trulio and Trigger (1950) and despite the later corrections of Burton (1991), it is generally accepted that these schemes cannot be modified to exactly conserve energy while retaining all of the following properties: STS stencil with velocities half-time centered with respect to positions, explicit second-order algorithm (locally implicit for internal energy), and definite positive kinetic energy. It is shown here that it is actually possible to modify the usual STS hydrodynamic schemes in order to be exactly energy-preserving, regardless of the evenness of their time centering assumptions and retaining their simple algorithmic structure. Burton's conservative scheme (1991) is found as a special case of time centering which cancels the term here designated as the "incompatible displacements residue." In contrast, von Neumann-Richtmyer's original centering can be preserved provided this residue is properly corrected. These two schemes are the only special cases able to capture isentropic flow with a third order entropy error, instead of second order in general. The momentum equation is presently obtained by application of a variational principle to an action integral discretized in both space and time. The internal energy equation follows from the discrete conservation of total energy. Entropy production by artificial dissipation is obtained to second order by a prediction-correction step on the momentum equation. The overall structure of the equations (explicit for momentum, locally implicit for internal energy) remains identical to that of usual STS "leap-frog" schemes, though complementary terms are required to correct the effects of time-step changes and artificial viscosity
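The STS "leap-frog" structure the paper builds on, velocities at half-integer times and positions at integer times, looks like this for a point mass (an illustrative sketch; the paper's schemes are the PDE analogues with artificial viscosity and variable time steps):

```python
def leapfrog(x0, v0, accel, dt, n_steps):
    """Space/time-staggered ('leap-frog') integration: velocities live
    at half-integer times, positions at integer times. For a linear
    spring the scheme is stable and the trajectory stays bounded, but
    the naively summed kinetic + potential energy at integer times is
    only conserved to O(dt^2), the kind of discrete energy bookkeeping
    the paper repairs."""
    x = x0
    v = v0 + 0.5 * dt * accel(x0)   # stagger velocity by dt/2
    xs = [x0]
    for _ in range(n_steps):
        x = x + dt * v              # drift: position at integer time
        v = v + dt * accel(x)       # kick: velocity at half-time
        xs.append(x)
    return xs
```

For a harmonic oscillator the amplitude stays bounded over many periods, even though the instantaneous discrete energy oscillates, which is precisely the distinction between long-term stability and exact energy conservation.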
Adaptive superposition of finite element meshes in linear and nonlinear dynamic analysis
NASA Astrophysics Data System (ADS)
Yue, Zhihua
2005-11-01
The numerical analysis of transient phenomena in solids, for instance, wave propagation and structural dynamics, is a very important and active area of study in engineering. Despite the current evolutionary state of modern computer hardware, practical analysis of large scale, nonlinear transient problems requires the use of adaptive methods where computational resources are locally allocated according to the interpolation requirements of the solution form. Adaptive analysis of transient problems involves obtaining solutions at many different time steps, each of which requires a sequence of adaptive meshes. Therefore, the execution speed of the adaptive algorithm is of paramount importance. In addition, transient problems require that the solution must be passed from one adaptive mesh to the next adaptive mesh with a bare minimum of solution-transfer error since this form of error compromises the initial conditions used for the next time step. A new adaptive finite element procedure (s-adaptive) is developed in this study for modeling transient phenomena in both linear elastic solids and nonlinear elastic solids caused by progressive damage. The adaptive procedure automatically updates the time step size and the spatial mesh discretization in transient analysis, achieving the accuracy and the efficiency requirements simultaneously. The novel feature of the s-adaptive procedure is the original use of finite element mesh superposition to produce spatial refinement in transient problems. The use of mesh superposition enables the s-adaptive procedure to completely avoid the need for cumbersome multipoint constraint algorithms and mesh generators, which makes the s-adaptive procedure extremely fast. Moreover, the use of mesh superposition enables the s-adaptive procedure to minimize the solution-transfer error. In a series of different solid mechanics problem types including 2-D and 3-D linear elastic quasi-static problems, 2-D material nonlinear quasi-static problems
Allen, Craig R.; Garmestani, Ahjond S.
2015-01-01
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.
Selecting registration schemes in case of interstitial lung disease follow-up in CT
Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros; Kazantzi, Alexandra; Kalogeropoulou, Christina; Pratikakis, Ioannis; Costaridou, Lena
2015-08-15
Purpose: Primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine and one nonrigid: third order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, evaluation methodology is based on distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD affected regions. Statistical analysis was performed in order to select near optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in the
Subranging scheme for SQUID sensors
NASA Technical Reports Server (NTRS)
Penanen, Konstantin I. (Inventor)
2008-01-01
A readout scheme for measuring the output from a SQUID-based sensor-array using an improved subranging architecture that includes multiple resolution channels (such as a coarse resolution channel and a fine resolution channel). The scheme employs a flux sensing circuit with a sensing coil connected in series to multiple input coils, each input coil being coupled to a corresponding SQUID detection circuit having a high-resolution SQUID device with independent linearizing feedback. A two-resolution configuration (coarse and fine) is illustrated with a primary SQUID detection circuit for generating a fine readout, and a secondary SQUID detection circuit for generating a coarse readout, both having feedback current coupled to the respective SQUID devices via feedback/modulation coils. The primary and secondary SQUID detection circuits function independently and derive independent feedback. Thus, the SQUID devices may be monitored independently of each other (and read simultaneously) to dramatically increase slew rates and dynamic range.
[PICS: pharmaceutical inspection cooperation scheme].
Morénas, J
2009-01-01
The pharmaceutical inspection cooperation scheme (PICS) is a structure comprising 34 participating authorities located worldwide (October 2008). It was created in 1995 on the basis of the pharmaceutical inspection convention (PIC) established by the European Free Trade Association (EFTA) in 1970. The scheme has several goals: to be an internationally recognised body in the field of good manufacturing practices (GMP) and to train inspectors (by way of an annual seminar and expert circles related notably to active pharmaceutical ingredients [API], quality risk management, and computerized systems, useful for the writing of inspection aide-memoires). PICS also promotes high standards for GMP inspectorates (through regular crossed audits) and provides a forum for exchanges on technical matters between inspectors, as well as between inspectors and the pharmaceutical industry.
A biometric signcryption scheme without bilinear pairing
NASA Astrophysics Data System (ADS)
Wang, Mingwen; Ren, Zhiyuan; Cai, Jun; Zheng, Wentao
2013-03-01
How to apply the entropy in biometrics to encryption and remote authentication schemes in order to simplify key management is a hot research area. Utilizing Dodis's fuzzy extractor method and Liu's original signcryption scheme, a biometric identity-based signcryption scheme is proposed in this paper. The proposed scheme is more efficient than most previously proposed biometric signcryption schemes because it requires neither bilinear pairing computation nor modular exponentiation computation, both of which are highly time-consuming. The analysis results show that, under the CDH and DL hard problem assumptions, the proposed scheme achieves confidentiality and unforgeability simultaneously.
A Decentralized Adaptive Approach to Fault Tolerant Flight Control
NASA Technical Reports Server (NTRS)
Wu, N. Eva; Nikulin, Vladimir; Heimes, Felix; Shormin, Victor
2000-01-01
This paper briefly reports some results of our study on the application of a decentralized adaptive control approach to a 6 DOF nonlinear aircraft model. The simulation results showed the potential of using this approach to achieve fault tolerant control. Based on this observation and some analysis, the paper proposes a multiple channel adaptive control scheme that makes use of the functionally redundant actuating and sensing capabilities in the model, and explains how to implement the scheme to tolerate actuator and sensor failures. The conditions, under which the scheme is a