Science.gov

Sample records for adaptive time-stepping scheme

  1. Non-iterative adaptive time-stepping scheme with temporal truncation error control for simulating variable-density flow

    NASA Astrophysics Data System (ADS)

    Hirthe, Eugenia M.; Graf, Thomas

    2012-12-01

    The automatic non-iterative second-order time-stepping scheme based on the temporal truncation error proposed by Kavetski et al. [Kavetski D, Binning P, Sloan SW. Non-iterative time-stepping schemes with adaptive truncation error control for the solution of Richards equation. Water Resour Res 2002;38(10):1211, http://dx.doi.org/10.1029/2001WR000720.] is implemented into the code of the HydroGeoSphere model. This time-stepping scheme is applied for the first time to the low-Rayleigh-number thermal Elder problem of free convection in porous media [van Reeuwijk M, Mathias SA, Simmons CT, Ward JD. Insights from a pseudospectral approach to the Elder problem. Water Resour Res 2009;45:W04416, http://dx.doi.org/10.1029/2008WR007421.], and to the solutal [Shikaze SG, Sudicky EA, Schwartz FW. Density-dependent solute transport in discretely-fractured geological media: is prediction possible? J Contam Hydrol 1998;34:273-91] problem of free convection in fractured-porous media. Numerical simulations demonstrate that the proposed scheme efficiently limits the temporal truncation error to a user-defined tolerance by controlling the time-step size. The non-iterative second-order time-stepping scheme can be applied to (i) thermal and solutal variable-density flow problems, (ii) linear and non-linear density functions, and (iii) problems including porous and fractured-porous media.
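
    As an illustration only (not the authors' implementation), a non-iterative error-based step controller of this kind can be sketched on a scalar ODE; the Heun/forward-Euler pair and the controller constants below are assumptions:

```python
import math

def adaptive_step_ode(f, y0, t_end, dt0, tol, order=2):
    """Non-iterative adaptive stepping: every step is accepted, and the NEXT
    step size is chosen from the current truncation-error estimate (no step
    rejection or iteration), in the spirit of Kavetski et al. (2002)."""
    t, y, dt = 0.0, y0, dt0
    ts, ys = [t], [y]
    while t < t_end:
        dt = min(dt, t_end - t)
        k1 = f(t, y)
        y_euler = y + dt * k1                  # 1st-order predictor
        k2 = f(t + dt, y_euler)
        y_heun = y + 0.5 * dt * (k1 + k2)      # 2nd-order corrector
        err = abs(y_heun - y_euler)            # temporal truncation error estimate
        t, y = t + dt, y_heun
        ts.append(t); ys.append(y)
        # grow/shrink the next step toward the user-defined tolerance
        scale = (tol / max(err, 1e-14)) ** (1.0 / order)
        dt *= min(2.0, max(0.5, 0.9 * scale))
    return ts, ys

# dy/dt = -y with y(0) = 1; the exact solution at t = 1 is exp(-1)
ts, ys = adaptive_step_ode(lambda t, y: -y, 1.0, 1.0, 0.01, 1e-5)
```

The controller keeps the error estimate near the tolerance, so the step size, not the user, decides the temporal resolution.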

  2. Accurate Monotonicity-Preserving Schemes With Runge-Kutta Time Stepping

    NASA Technical Reports Server (NTRS)

    Suresh, A.; Huynh, H. T.

    1997-01-01

    A new class of high-order monotonicity-preserving schemes for the numerical solution of conservation laws is presented. The interface value in these schemes is obtained by limiting a higher-order polynomial reconstruction. The limiting is designed to preserve accuracy near extrema and to work well with Runge-Kutta time stepping. Computational efficiency is enhanced by a simple test that determines whether the limiting procedure is needed. For linear advection in one dimension, these schemes are shown to be monotonicity preserving; numerical experiments for advection as well as for the Euler equations also confirm their high accuracy, good shock resolution, and computational efficiency.

  3. Non-iterative adaptive time stepping with truncation error control for simulating variable-density flow

    NASA Astrophysics Data System (ADS)

    Hirthe, E. M.; Graf, T.

    2012-04-01

    Fluid density variations occur due to changes in the solute concentration, temperature and pressure of groundwater. Examples are interaction between freshwater and seawater, radioactive waste disposal, groundwater contamination, and geothermal energy production. The physical coupling between flow and transport introduces non-linearity in the governing mathematical equations, such that solving variable-density flow problems typically requires long computational times. Computational efficiency can be attained through the use of adaptive time-stepping schemes. The aim of this work is therefore to apply a non-iterative adaptive time-stepping scheme based on the local truncation error to variable-density flow problems. This new scheme is implemented into the code of the HydroGeoSphere model (Therrien et al., 2011). The new time-stepping scheme is applied to the Elder (1967) and the Shikaze et al. (1998) problems of free convection in porous and fractured-porous media, respectively. Numerical simulations demonstrate that non-iterative time-stepping based on local truncation error control fully automates the time step size and efficiently limits the temporal discretization error to the user-defined tolerance. Results of the Elder problem show that the new time-stepping scheme presented here is significantly more efficient than uniform time-stepping when high accuracy is required. Results of the Shikaze problem reveal that the new scheme is considerably faster than conventional time-stepping where time step sizes are either constant or controlled by absolute head/concentration changes. Future research will focus on the application of the new time-stepping scheme to variable-density flow in complex real-world fractured-porous rock.

  4. Adaptive time steps in trajectory surface hopping simulations.

    PubMed

    Spörkel, Lasse; Thiel, Walter

    2016-05-21

    Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling. PMID:27208937

  5. Adaptive time steps in trajectory surface hopping simulations

    NASA Astrophysics Data System (ADS)

    Spörkel, Lasse; Thiel, Walter

    2016-05-01

    Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.

  6. An adaptive time-stepping strategy for solving the phase field crystal model

    SciTech Connect

    Zhang, Zhengru; Ma, Yuan; Qiao, Zhonghua

    2013-09-15

    In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. Numerical simulation of the PFC model requires long times to reach a steady state, so a large-time-step method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can not only resolve the steady-state solution, but also capture the dynamical development of the solution efficiently and accurately. The numerical experiments demonstrate that significant CPU time is saved for long-time simulations.
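
    A toy sketch of energy-based step selection as described above (the functional form, bounds, and alpha are assumptions, loosely tying dt to the energy decay rate; the actual PFC discretization is not reproduced):

```python
import math

def energy_adaptive_dt(dEdt, dt_min=1e-3, dt_max=0.5, alpha=1e3):
    """Step size driven by the rate of energy change: small steps while the
    energy evolves quickly, steps relaxing to dt_max near steady state.
    (The bounds and alpha are illustrative choices.)"""
    return max(dt_min, dt_max / math.sqrt(1.0 + alpha * dEdt * dEdt))

# Demo on a scalar gradient flow y' = -E'(y) with E(y) = (y^2 - 1)^2 / 4,
# whose steady states are y = +-1.
E_prime = lambda y: y * (y * y - 1.0)
y = 2.0
for _ in range(2000):
    dEdt = -E_prime(y) ** 2          # along the flow, dE/dt = -|E'(y)|^2
    y -= energy_adaptive_dt(dEdt) * E_prime(y)
# y has relaxed to the steady state y = 1, with the step now at dt_max
```

Early in the transient the energy changes rapidly and the step stays near dt_min; as the steady state is approached, dE/dt vanishes and the step grows to dt_max.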

  7. Ancient numerical daemons of conceptual hydrological modeling: 1. Fidelity and efficiency of time stepping schemes

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Kavetski, Dmitri

    2010-10-01

    A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
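
    For instance (a minimal stand-in, not one of the paper's six models), the unconditional stability of fixed-step implicit Euler can be seen on a linear reservoir store dS/dt = P - kS, where the implicit update has a closed form:

```python
def implicit_euler_reservoir(S0, P, k, dt, n_steps):
    """Fixed-step implicit (backward) Euler for dS/dt = P - k*S.
    Solving S_new = S + dt*(P - k*S_new) gives the closed-form update below;
    a nonlinear store would need a Newton iteration instead."""
    S, out = S0, [S0]
    for _ in range(n_steps):
        S = (S + dt * P) / (1.0 + dt * k)
        out.append(S)
    return out

# Even with dt far above the explicit stability limit 2/k, backward Euler
# decays monotonically toward the steady state S = P/k (here 0.4):
traj = implicit_euler_reservoir(S0=10.0, P=2.0, k=5.0, dt=1.0, n_steps=50)
```

An explicit Euler step of the same size (dt*k = 5 > 2) would oscillate with growing amplitude, which is the kind of numerical artifact the paper shows can masquerade as model structural error.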

  8. Convergence Acceleration for Multistage Time-Stepping Schemes

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, Eli L.; Rossow, C.-C.; Vatsa, V. N.

    2006-01-01

    The convergence of a Runge-Kutta (RK) scheme with multigrid is accelerated by preconditioning with a fully implicit operator. With the extended stability of the Runge-Kutta scheme, CFL numbers as high as 1000 could be used. The implicit preconditioner addresses the stiffness in the discrete equations associated with stretched meshes. Numerical dissipation operators (based on the Roe scheme, a matrix formulation, and the CUSP scheme) as well as the number of RK stages are considered in evaluating the RK/implicit scheme. Both the numerical and computational efficiency of the scheme with the different dissipation operators are discussed. The RK/implicit scheme is used to solve the two-dimensional (2-D) and three-dimensional (3-D) compressible, Reynolds-averaged Navier-Stokes equations. In two dimensions, turbulent flows over an airfoil at subsonic and transonic conditions are computed. The effects of mesh cell aspect ratio on convergence are investigated for Reynolds numbers between 5.7 x 10(exp 6) and 100.0 x 10(exp 6). Results are also obtained for a transonic wing flow. For both 2-D and 3-D problems, the computational time of a well-tuned standard RK scheme is reduced by at least a factor of four.

  9. An Explicit Super-Time-Stepping Scheme for Non-Symmetric Parabolic Problems

    NASA Astrophysics Data System (ADS)

    Gurski, K. F.; O'Sullivan, S.

    2010-09-01

    Explicit numerical methods for the solution of a system of differential equations may suffer from a time step size that approaches zero in order to satisfy stability conditions. When the differential equations are dominated by a skew-symmetric component, the difficulty is that the imaginary parts of the eigenvalues dominate the real parts. We compare stable time step limits for the super-time-stepping method of Alexiades, Amiez, and Gremaud (super-time-stepping methods belong to the Runge-Kutta-Chebyshev class) and a new method modeled on a predictor-corrector scheme with multiplicative operator splitting. This new explicit method improves on the stability of the original super-time-stepping whenever the skew-symmetric term is nonzero, which occurs in particular convection-diffusion problems and more generally when the iteration matrix represents a nonlinear operator. The new method is stable for skew-symmetric-dominated systems where the regular super-time-stepping scheme fails. The method is second order in time (the order may be increased by Richardson extrapolation), and the spatial order is determined by the user's choice of discretization scheme. We present a comparison between the two super-time-stepping methods to show the speed-up available for any non-symmetric system, using the nearly symmetric Black-Scholes equation as an example.

  10. Adaptive time stepping algorithm for Lagrangian transport models: Theory and idealised test cases

    NASA Astrophysics Data System (ADS)

    Shah, Syed Hyder Ali Muttaqi; Heemink, Arnold Willem; Gräwe, Ulf; Deleersnijder, Eric

    2013-08-01

    Random walk simulations have excellent potential in marine and oceanic modelling. This is essentially due to their relative simplicity and their ability to represent advective transport without being plagued by the deficiencies of the Eulerian methods. The physical and mathematical foundations of random walk modelling of turbulent diffusion have become solid over the years. Random walk models rest on the theory of stochastic differential equations. Unfortunately, the latter and the related numerical aspects have not attracted much attention in the oceanic modelling community. The main goal of this paper is to help bridge the gap by developing an efficient adaptive time stepping algorithm for random walk models. Its performance is examined on two idealised test cases of turbulent dispersion: (i) pycnocline crossing and (ii) non-flat isopycnal diffusion, which are inspired by shallow-sea dynamics and large-scale ocean transport processes, respectively. The numerical results of the adaptive time stepping algorithm are compared with the fixed-time-increment Milstein scheme, showing that the adaptive time stepping algorithm for Lagrangian random walk models is more efficient than its fixed step-size counterpart without any loss in accuracy.
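
    A bare-bones fixed-increment Milstein step for a 1-D Itô SDE dX = a(x) dt + b(x) dW (the reference scheme the adaptive algorithm above is compared against; the drift/diffusion choices in the demo are illustrative):

```python
import math, random

def milstein_step(x, dt, a, b, db, rng):
    """One fixed-increment Milstein step for dX = a(x) dt + b(x) dW,
    where db is the derivative b'(x). In random-walk diffusion models one
    typically has b(x) = sqrt(2 K(x)) for a diffusivity profile K."""
    dW = rng.gauss(0.0, math.sqrt(dt))
    return x + a(x) * dt + b(x) * dW + 0.5 * b(x) * db(x) * (dW * dW - dt)

# Constant-diffusivity illustration: K = 0.5, so a = 0, b = 1, b' = 0,
# which reduces Milstein to a plain Gaussian random walk.
rng = random.Random(42)
x = 0.0
for _ in range(1000):
    x = milstein_step(x, 0.01, lambda s: 0.0, lambda s: 1.0, lambda s: 0.0, rng)
# x is now a single sample from N(0, T) with T = 10
```

For spatially varying K the correction term 0.5*b*b'*(dW^2 - dt) is what lifts the strong order from 0.5 (Euler-Maruyama) to 1.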

  11. Numerical solution of the Euler equations by finite volume methods using Runge Kutta time stepping schemes

    NASA Technical Reports Server (NTRS)

    Jameson, A.; Schmidt, Wolfgang; Turkel, Eli

    1981-01-01

    A new combination of a finite volume discretization in conjunction with carefully designed dissipative terms of third order, and a Runge-Kutta time stepping scheme, is shown to yield an effective method for solving the Euler equations in arbitrary geometric domains. The method has been used to determine the steady transonic flow past an airfoil using an O mesh. Convergence to a steady state is accelerated by the use of a variable time step determined by the local Courant number, and the introduction of a forcing term proportional to the difference between the local total enthalpy and its free stream value.
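
    The local-step idea can be written down in a line (a schematic 1-D form of our own, not the paper's O-mesh implementation; the CFL value is an assumption):

```python
def local_time_steps(dx, u, c, cfl=0.8):
    """Per-cell time step from a local CFL condition,
    dt_i = cfl * dx_i / (|u_i| + c_i), for steady-state acceleration:
    each cell marches at the largest step its own wave speeds allow."""
    return [cfl * dxi / (abs(ui) + ci) for dxi, ui, ci in zip(dx, u, c)]

# Coarser cells and slower waves get larger local steps:
dts = local_time_steps(dx=[0.1, 0.2], u=[2.0, 1.0], c=[1.0, 1.0])
```

Because every cell advances by a different pseudo-time, the transient is no longer time-accurate; only the converged steady state is meaningful.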

  12. A class of large time step Godunov schemes for hyperbolic conservation laws and applications

    NASA Astrophysics Data System (ADS)

    Qian, ZhanSen; Lee, Chun-Hian

    2011-08-01

    A large time step (LTS) Godunov scheme first proposed by LeVeque is further developed in the present work and applied to the Euler equations. Based on an analysis of the computational performance of LeVeque's linear approximation on wave interactions, a multi-wave approximation on the rarefaction fan is proposed to avoid the occurrence of rarefaction shocks in computations. The developed LTS scheme is validated using 1-D test cases, manifesting high resolution for discontinuities and the capability of maintaining computational stability when large CFL numbers are imposed. The scheme is then extended to multidimensional problems using a dimensional splitting technique; the treatment of boundary conditions for this multidimensional LTS scheme is also proposed. As demonstration problems, inviscid flows over the NACA0012 airfoil and the ONERA M6 wing with given sweep angle are simulated using the developed LTS scheme. The numerical results reveal the high resolution of the scheme, which captures shocks within 1-2 grid points. The resolution improves gradually as the CFL number increases, up to an upper bound beyond which the solution becomes severely oscillatory across the shock. Computational efficiency comparisons show that the developed scheme effectively reduces computational time by increasing the time step (CFL number).

  13. A multistage time-stepping scheme for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, E.

    1985-01-01

    A class of explicit multistage time-stepping schemes is used to construct an algorithm for solving the compressible Navier-Stokes equations. Flexibility in treating arbitrary geometries is obtained with a finite-volume formulation. Numerical efficiency is achieved by employing techniques for accelerating convergence to steady state. Computer processing is enhanced through vectorization of the algorithm. The scheme is evaluated by solving laminar and turbulent flows over a flat plate and an NACA 0012 airfoil. Numerical results are compared with theoretical solutions or other numerical solutions and/or experimental data.

  14. Large time-step stability of explicit one-dimensional advection schemes

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.

    1993-01-01

    There is a widespread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' - that the Courant number, c = uΔt/Δx, must be less than or equal to one - for stability in the von Neumann sense. This puts severe limitations on the time step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often require less restrictive time-step conditions for stability but are more expensive per time step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis does not place any restriction on the allowable Courant number. Any explicit scheme that is stable for c ≤ 1, with a complex amplitude ratio G(c), can be easily extended to arbitrarily large c. The complex amplitude ratio is then given by exp(−iNθ) G(Δc), where N is the integer part of c and Δc = c − N (less than 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all but rather a 'range restriction' on the 'pieces' in a piecewise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large Δt. The analysis also includes a simple interpretation of large-Δt total-variation-diminishing (TVD) constraints.
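
    The large-time-step construction above is easy to check numerically; a sketch with first-order upwind as the base scheme (the base-scheme choice is ours, for illustration):

```python
import cmath, math

def upwind_G(c, theta):
    """von Neumann amplitude ratio of first-order upwind advection
    at Courant number c and phase angle theta."""
    return 1.0 - c * (1.0 - cmath.exp(-1j * theta))

def extended_G(c, theta):
    """Large-Courant-number extension: shift by the integer part N of c
    exactly (factor exp(-i*N*theta), modulus 1) and apply the base scheme
    only to the fractional part dc = c - N < 1."""
    N = int(c)
    return cmath.exp(-1j * N * theta) * upwind_G(c - N, theta)

# Plain upwind is unstable for c > 1; the extended form is not.
thetas = [k * math.pi / 50 for k in range(1, 50)]
worst_plain = max(abs(upwind_G(2.7, th)) for th in thetas)   # > 1: unstable
worst_ext = max(abs(extended_G(2.7, th)) for th in thetas)   # <= 1: stable
```

The integer shift is exact (modulus one), so the extended scheme inherits the stability of the base scheme applied to Δc alone, exactly as the abstract argues.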

  15. An Adaptive Fourier Filter for Relaxing Time Stepping Constraints for Explicit Solvers

    SciTech Connect

    Gelb, Anne; Archibald, Richard K

    2015-01-01

    Filtering is necessary to stabilize numerical approximations of piecewise smooth solutions. The resulting diffusion stabilizes the method but may fail to resolve the solution near discontinuities. Moreover, high-order filtering still requires cost-prohibitive time stepping. This paper introduces an adaptive filter that controls spurious modes of the solution but is not unnecessarily diffusive. Consequently, we are able to stabilize the solution with larger time steps while also taking advantage of the accuracy of a high-order filter.
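
    For context, a standard high-order exponential Fourier filter looks as follows (this fixed form is the non-adaptive baseline; the paper's contribution is choosing the damping adaptively, which is not reproduced here):

```python
import math

def exponential_filter(coeffs, p, alpha=36.0):
    """Apply sigma(eta) = exp(-alpha * eta**(2p)) to spectral coefficients,
    with eta the normalized mode number. Larger p leaves more low modes
    untouched and damps only the highest modes (roughly machine zero at
    eta = 1 for alpha near 36)."""
    N = len(coeffs)
    return [c * math.exp(-alpha * (k / (N - 1)) ** (2 * p))
            for k, c in enumerate(coeffs)]

filtered = exponential_filter([1.0] * 8, p=4)
# the mean mode passes through exactly; the highest mode is damped to ~1e-16
```

The trade-off the abstract describes is visible in p: a high-order filter (large p) barely diffuses the resolved modes but forces small time steps, which is what the adaptive variant relaxes.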

  16. Multi-rate time stepping schemes for hydro-geomechanical model for subsurface methane hydrate reservoirs

    NASA Astrophysics Data System (ADS)

    Gupta, Shubhangi; Wohlmuth, Barbara; Helmig, Rainer

    2016-05-01

    We present an extrapolation-based semi-implicit multi-rate time stepping (MRT) scheme and a compound-fast MRT scheme for a naturally partitioned, multi-time-scale hydro-geomechanical hydrate reservoir model. We evaluate the performance of the two MRT methods in terms of speed-up and accuracy by comparison with an iteratively coupled solution scheme, and discuss their advantages and disadvantages. We observe that the extrapolation-based semi-implicit method gives a higher speed-up but is strongly dependent on the relative time scales of the latent (slow) and active (fast) components. On the other hand, the compound-fast method is more robust and less sensitive to the relative time scales, but gives a lower speed-up than the semi-implicit method, especially when the relative time scales of the active and latent components are comparable.

  17. Multi time-step wavefront reconstruction for tomographic adaptive-optics systems.

    PubMed

    Ono, Yoshito H; Akiyama, Masayuki; Oya, Shin; Lardiére, Olivier; Andersen, David R; Correia, Carlos; Jackson, Kate; Bradley, Colin

    2016-04-01

    In tomographic adaptive-optics (AO) systems, errors due to tomographic wavefront reconstruction limit the performance and angular size of the scientific field of view (FoV), where AO correction is effective. We propose a multi time-step tomographic wavefront reconstruction method to reduce the tomographic error by using measurements from both the current and previous time steps simultaneously. We further outline the method to feed the reconstructor with both wind speed and direction of each turbulence layer. An end-to-end numerical simulation, assuming a multi-object AO (MOAO) system on a 30 m aperture telescope, shows that the multi time-step reconstruction increases the Strehl ratio (SR) over a scientific FoV of 10 arc min in diameter by a factor of 1.5-1.8 when compared to the classical tomographic reconstructor, depending on the guide star asterism and with perfect knowledge of wind speeds and directions. We also evaluate the multi time-step reconstruction method and the wind estimation method on the RAVEN demonstrator under laboratory setting conditions. The wind speeds and directions at multiple atmospheric layers are measured successfully in the laboratory experiment by our wind estimation method with errors below 2 m/s. With these wind estimates, the multi time-step reconstructor increases the SR value by a factor of 1.2-1.5, which is consistent with a prediction from the end-to-end numerical simulation. PMID:27140785

  18. An implicit time-stepping scheme for rigid body dynamics with Coulomb friction

    SciTech Connect

    STEWART,DAVID; TRINKLE,JEFFREY C.

    2000-02-15

    In this paper a new time-stepping method for simulating systems of rigid bodies is given. Unlike methods which take an instantaneous point of view, the method is based on impulse-momentum equations, and so does not need to explicitly resolve impulsive forces. On the other hand, the method is distinct from previous impulsive methods in that it does not require explicit collision checking and it can handle simultaneous impacts. Numerical results are given for one planar and one three-dimensional example, which demonstrate the practicality of the method, and its convergence as the step size becomes small.

  19. Simulating diffusion processes in discontinuous media: A numerical scheme with constant time steps

    SciTech Connect

    Lejay, Antoine; Pichot, Geraldine

    2012-08-30

    In this article, we propose new Monte Carlo techniques for moving a diffusive particle in discontinuous media. In this framework, we characterize the stochastic process that governs the positions of the particle. The key tool is the reduction of the process to a Skew Brownian motion (SBM). In a zone where the coefficients are locally constant on each side of the discontinuity, the new position of the particle after a constant time step is sampled from the exact distribution of the SBM process at the considered time. To do so, we propose two different but equivalent algorithms: a two-step simulation with a stop at the discontinuity, and a one-step direct simulation of the SBM dynamics. Some benchmark tests illustrate their effectiveness.

  20. An Efficient Time-Stepping Scheme for Ab Initio Molecular Dynamics Simulations

    NASA Astrophysics Data System (ADS)

    Tsuchida, Eiji

    2016-08-01

    In ab initio molecular dynamics simulations of real-world problems, the simple Verlet method is still widely used for integrating the equations of motion, while more efficient algorithms are routinely used in classical molecular dynamics. We show that if the Verlet method is used in conjunction with pre- and postprocessing, the accuracy of the time integration is significantly improved with only a small computational overhead. We also propose several extensions of the algorithm required for use in ab initio molecular dynamics. The validity of the processed Verlet method is demonstrated in several examples including ab initio molecular dynamics simulations of liquid water. The structural properties obtained from the processed Verlet method are found to be sufficiently accurate even for large time steps close to the stability limit. This approach results in a 2× performance gain over the standard Verlet method for a given accuracy. We also show how to generate a canonical ensemble within this approach.
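
    The baseline the abstract improves on is the velocity-Verlet update below (the pre-/post-processing maps of the processed method are specific to the paper and not reproduced; the oscillator parameters are illustrative):

```python
def velocity_verlet(pos, vel, force, mass, dt, n_steps):
    """Standard velocity-Verlet integration of m x'' = F(x): a half-kick,
    a drift, a force re-evaluation, and a second half-kick per step."""
    f = force(pos)
    for _ in range(n_steps):
        vel += 0.5 * dt * f / mass
        pos += dt * vel
        f = force(pos)
        vel += 0.5 * dt * f / mass
    return pos, vel

# Harmonic oscillator with F = -k x, k = 4: the energy E = v^2/2 + 2 x^2
# stays within a bounded O(dt^2) band of its initial value 2.0.
x, v = velocity_verlet(1.0, 0.0, lambda s: -4.0 * s, 1.0, 0.05, 1000)
```

Pre- and postprocessing wraps this kernel in a change of variables applied before and after the integration, which is why the per-step overhead the abstract reports is small.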

  1. A conservative finite volume scheme with time-accurate local time stepping for scalar transport on unstructured grids

    NASA Astrophysics Data System (ADS)

    Cavalcanti, José Rafael; Dumbser, Michael; Motta-Marques, David da; Fragoso Junior, Carlos Ruberto

    2015-12-01

    In this article we propose a new conservative high resolution TVD (total variation diminishing) finite volume scheme with time-accurate local time stepping (LTS) on unstructured grids for the solution of scalar transport problems, which are typical in the context of water quality simulations. To keep the presentation of the new method as simple as possible, the algorithm is only derived in two space dimensions and for purely convective transport problems, hence neglecting diffusion and reaction terms. The new numerical method for the solution of the scalar transport is directly coupled to the hydrodynamic model of Casulli and Walters (2000) that provides the dynamics of the free surface and the velocity vector field based on a semi-implicit discretization of the shallow water equations. Wetting and drying is handled rigorously by the nonlinear algorithm proposed by Casulli (2009). The new time-accurate LTS algorithm allows a different time step size for each element of the unstructured grid, based on an element-local Courant-Friedrichs-Lewy (CFL) stability condition. The proposed method does not need any synchronization between different time steps of different elements and is by construction locally and globally conservative. The LTS scheme is based on a piecewise linear polynomial reconstruction in space-time using the MUSCL-Hancock method, to obtain second order of accuracy in both space and time. The new algorithm is first validated on some classical test cases for pure advection problems, for which exact solutions are known. In all cases we obtain a very good level of accuracy, showing also numerical convergence results; we furthermore confirm mass conservation up to machine precision and observe an improved computational efficiency compared to a standard second order TVD scheme for scalar transport with global time stepping (GTS). Then, the new LTS method is applied to some more complex problems, where the new scalar transport scheme has also been coupled to

  2. Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes

    SciTech Connect

    Lu, S.

    2002-07-01

    As a result of advancing TCP/IP-based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity-related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches, so the execution of the coupled codes usually requires significant CPU time when a complicated system is analyzed. A Leap Frog scheme has been used to reduce the run time, with the extent of the decoupling usually determined by a trial-and-error process for a specific analysis. It is the intent of this paper to develop a set of general criteria which can be used to invoke an automatic Leap Frog algorithm. The algorithm will not only provide the run-time reduction but also preserve the accuracy. The criteria will also serve as the basis of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced and the thermal-hydraulic code is marching with a relatively large time step. (authors)

  3. Navier-Stokes calculations for DFVLR F5-wing in wind tunnel using Runge-Kutta time-stepping scheme

    NASA Technical Reports Server (NTRS)

    Vatsa, V. N.; Wedan, B. W.

    1988-01-01

    A three-dimensional Navier-Stokes code using an explicit multistage Runge-Kutta type of time-stepping scheme is used for solving the transonic flow past a finite wing mounted inside a wind tunnel. Flow past the same wing in free air was also computed to assess the effect of wind-tunnel walls on such flows. Numerical efficiency is enhanced through vectorization of the computer code. A Cyber 205 computer with 32 million words of internal memory was used for these computations.

  4. Convergence of a class of semi-implicit time-stepping schemes for nonsmooth rigid multibody dynamics.

    SciTech Connect

    Gavrea, B. I.; Anitescu, M.; Potra, F. A.; Mathematics and Computer Science; Univ. of Pennsylvania; Univ. of Maryland

    2008-01-01

    In this work we present a framework for the convergence analysis in a measure differential inclusion sense of a class of time-stepping schemes for multibody dynamics with contacts, joints, and friction. This class of methods solves one linear complementarity problem per step and contains the semi-implicit Euler method, as well as trapezoidal-like methods for which second-order convergence was recently proved under certain conditions. By using the concept of a reduced friction cone, the analysis includes, for the first time, a convergence result for the case that includes joints. An unexpected intermediary result is that we are able to define a discrete velocity function of bounded variation, although the natural discrete velocity function produced by our algorithm may have unbounded variation.

  5. Finite-difference modeling with variable grid-size and adaptive time-step in porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Wu, Guochen

    2014-04-01

    Forward modeling of elastic wave propagation in porous media is of great importance for understanding and interpreting the influence of rock properties on the characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid size and time step, which incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist in the subsurface. To overcome this handicap, this paper develops a staggered-grid finite-difference scheme for elastic wave modeling in porous media that combines a variable grid size with an adaptive time step. Variable finite-difference coefficients and wavefield interpolation are used to realize the transition of wave propagation between regions of different grid size. The accuracy and efficiency of the algorithm are shown by numerical examples. The proposed method offers low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.

  6. Numerical simulation of diffusion MRI signals using an adaptive time-stepping method.

    PubMed

    Li, Jing-Rebecca; Calhoun, Donna; Poupon, Cyril; Le Bihan, Denis

    2014-01-20

    The effect on the MRI signal of water diffusion in biological tissues in the presence of applied magnetic field gradient pulses can be modelled by a multiple compartment Bloch-Torrey partial differential equation. We present a method for the numerical solution of this equation by coupling a standard Cartesian spatial discretization with an adaptive time discretization. The time discretization is done using the explicit Runge-Kutta-Chebyshev method, which is more efficient than the forward Euler time discretization for diffusive-type problems. We use this approach to simulate the diffusion MRI signal from the extra-cylindrical compartment in a tissue model of the brain gray matter consisting of cylindrical and spherical cells and illustrate the effect of cell membrane permeability. PMID:24351275

  8. Efficient computation of the Grünwald-Letnikov fractional diffusion derivative using adaptive time step memory

    NASA Astrophysics Data System (ADS)

    MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.

    2015-09-01

    Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives, in which all previous time points contribute to the current iteration. In general, numerical approaches that truncate part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
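
    The full-history Grünwald-Letnikov sum that motivates the paper's adaptive-memory method can be sketched directly; the adaptive variant would sample the distant past at progressively longer intervals, while the sketch below keeps every point.

```python
import math

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), by the
    standard recursion w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_derivative(f_hist, alpha, h):
    """Approximate D^alpha f at the latest time from the full history
    f_hist = [f(t_0), ..., f(t_n)] on a uniform grid of spacing h.
    The adaptive-memory method keeps this sum but thins the distant
    past, trading a few terms for a large speedup."""
    n = len(f_hist) - 1
    w = gl_weights(alpha, n)
    return sum(w[k] * f_hist[n - k] for k in range(n + 1)) / h ** alpha

# Check against the exact half-derivative of f(t) = t, which is
# t^{1/2} / Gamma(3/2) = 2 * sqrt(t / pi); here t = 1.
h = 0.01
hist = [k * h for k in range(101)]          # f(t) = t on [0, 1]
approx = gl_derivative(hist, 0.5, h)
exact = 2.0 / math.sqrt(math.pi)
```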

  9. Unconditionally energy stable time stepping scheme for Cahn-Morral equation: Application to multi-component spinodal decomposition and optimal space tiling

    NASA Astrophysics Data System (ADS)

    Tavakoli, Rouhollah

    2016-01-01

    An unconditionally energy stable time stepping scheme is introduced to solve Cahn-Morral-like equations in the present study. It is constructed based on a combination of David Eyre's time stepping scheme and a Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free-energy density function, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time step size. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results.
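
    Eyre's splitting idea, treating the convex part of the energy implicitly and the concave part explicitly, can be illustrated on the simplest gradient flow u' = -(u^3 - u), dropping the spatial operator and the Schur complement machinery of the paper. Energy decay then holds for any step size.

```python
def eyre_step(u, dt, newton_iters=50):
    """One unconditionally gradient-stable step for u' = -(u^3 - u).

    Energy split E = u^4/4 - u^2/2 = E_c - E_e with E_c = u^4/4 and
    E_e = u^2/2 both convex; the convex part is implicit, the concave
    part explicit:  u_new + dt*u_new^3 = u + dt*u.
    """
    rhs = u + dt * u
    x = u  # Newton iteration on g(x) = x + dt*x^3 - rhs (monotone in x)
    for _ in range(newton_iters):
        x -= (x + dt * x ** 3 - rhs) / (1.0 + 3.0 * dt * x * x)
    return x

def energy(u):
    return 0.25 * u ** 4 - 0.5 * u ** 2

# Energy decays monotonically even for a huge step size.
u, dt = 2.0, 10.0
energies = [energy(u)]
for _ in range(20):
    u = eyre_step(u, dt)
    energies.append(energy(u))
```

    The iterates relax toward the energy minimizer u = 1 with the energy non-increasing at every step, which is the unconditional stability claim in miniature.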

  10. A novel adaptive time stepping variant of the Boris–Buneman integrator for the simulation of particle accelerators with space charge

    SciTech Connect

    Toggweiler, Matthias; Adelmann, Andreas; Arbenz, Peter; Yang, Jianjun

    2014-09-15

    We show that adaptive time stepping in particle accelerator simulation is an enhancement for certain problems. The new algorithm has been implemented in the OPAL (Object Oriented Parallel Accelerator Library) framework. The idea is to adjust the frequency of costly self-field calculations, which are needed to model Coulomb interaction (space charge) effects. In analogy to a Kepler orbit simulation that requires a higher time step resolution at the close encounter, we propose to choose the time step based on the magnitude of the space charge forces. Inspired by geometric integration techniques, our algorithm chooses the time step proportional to a function of the current phase space state instead of calculating a local error estimate like a conventional adaptive procedure. Building on recent work, a more profound argument is given on how exactly the time step should be chosen. An intermediate algorithm, initially built to allow a clearer analysis by introducing separate time steps for external field and self-field integration, turned out to be useful in its own right for a large class of problems.
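
    The core idea, choosing the step from the current phase-space state rather than from a local error estimate, can be sketched on the Kepler analogy the authors mention. The step-size rule and constants below are illustrative.

```python
import math

def accel(x, y):
    """Inverse-square attraction toward the origin, GM = 1."""
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

# Eccentric Kepler orbit (a = 1, e = 0.6): the state-dependent rule
# takes small steps near perihelion and large steps near aphelion.
x, y, vx, vy = 0.4, 0.0, 0.0, 2.0
t, radii, dts = 0.0, [], []
while t < 2.0 * math.pi:              # roughly one orbital period
    r = math.hypot(x, y)
    dt = 0.05 * r ** 1.5              # step ~ local dynamical time scale
    ax, ay = accel(x, y)              # velocity-Verlet (leapfrog) step
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    x += dt * vx
    y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    t += dt
    radii.append(r)
    dts.append(dt)
```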

  11. Empirical versus time stepping with embedded error control for density-driven flow in porous media

    NASA Astrophysics Data System (ADS)

    Younes, Anis; Ackerer, Philippe

    2010-08-01

    Modeling density-driven flow in porous media may require very long computational time due to the nonlinear coupling between flow and transport equations. Time stepping schemes are often used to adapt the time step size in order to reduce the computational cost of the simulation. In this work, the empirical time stepping scheme, which adapts the time step size according to the performance of the iterative nonlinear solver, is compared to an adaptive time stepping scheme where the time step length is controlled by the temporal truncation error. Results of the simulations of the Elder problem show that (1) the empirical time stepping scheme can lead to inaccurate results even with a small convergence criterion, (2) accurate results are obtained when the time step size selection is based on truncation error control, (3) a non-iterative scheme with proper time step management can be faster and lead to a more accurate solution than the standard iterative procedure with empirical time stepping and (4) the temporal truncation error can have a significant effect on the results and can be considered as one of the reasons for the differences observed in the Elder numerical results.
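
    The two strategies compared in the paper can be caricatured on a scalar ODE: an empirical controller that reacts to solver effort, and a controller driven by an embedded truncation-error estimate (a Heun/Euler pair here; all tolerances and growth factors are illustrative).

```python
import math

def solve_error_controlled(f, y0, t_end, dt0=0.1, tol=1e-4):
    """Heun/Euler pair: the difference between the 2nd- and 1st-order
    solutions estimates the local truncation error, which drives dt."""
    t, y, dt, history = 0.0, y0, dt0, []
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        k1 = f(t, y)
        k2 = f(t + dt, y + dt * k1)
        y_euler = y + dt * k1
        y_heun = y + 0.5 * dt * (k1 + k2)
        err = abs(y_heun - y_euler)
        if err <= tol:                 # accept the step
            t += dt
            y = y_heun
            history.append(dt)
        # grow or shrink: the Euler local error is O(dt^2)
        dt *= 0.9 * math.sqrt(tol / max(err, 1e-14))
    return y, history

def solve_empirical(f, y0, t_end, dt0=0.1, iters_taken=lambda dt: 3):
    """Empirical rule: grow dt when the (here mocked) nonlinear solver
    converges quickly, shrink when it struggles; no error estimate."""
    t, y, dt = 0.0, y0, dt0
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        y += dt * f(t, y)              # stands in for one nonlinear solve
        t += dt
        dt *= 1.2 if iters_taken(dt) <= 5 else 0.5
    return y

f = lambda t, y: -y
y_err, hist = solve_error_controlled(f, 1.0, 1.0)
y_emp = solve_empirical(f, 1.0, 1.0)
```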

  12. An adaptive Cartesian control scheme for manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    An adaptive control scheme for direct control of manipulator end-effectors to achieve trajectory tracking in Cartesian space is developed. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of feedforward control and the inclusion of an auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for online implementation with high sampling rates.

  13. Adaptive MPEG-2 video data hiding scheme

    NASA Astrophysics Data System (ADS)

    Sarkar, Anindya; Madhow, Upamanyu; Chandrasekaran, Shivkumar; Manjunath, Bangalore S.

    2007-02-01

    We have investigated adaptive mechanisms for high-volume transform-domain data hiding in MPEG-2 video which can be tuned to sustain varying levels of compression attacks. The data is hidden in the uncompressed domain by scalar quantization index modulation (QIM) on a selected set of low-frequency discrete cosine transform (DCT) coefficients. We propose an adaptive hiding scheme where the embedding rate is varied according to the type of frame and the reference quantization parameter (decided according to MPEG-2 rate control scheme) for that frame. For a 1.5 Mbps video and a frame-rate of 25 frames/sec, we are able to embed almost 7500 bits/sec. Also, the adaptive scheme hides 20% more data and incurs significantly less frame errors (frames for which the embedded data is not fully recovered) than the non-adaptive scheme. Our embedding scheme incurs insertions and deletions at the decoder which may cause de-synchronization and decoding failure. This problem is solved by the use of powerful turbo-like codes and erasures at the encoder. The channel capacity estimate gives an idea of the minimum code redundancy factor required for reliable decoding of hidden data transmitted through the channel. To that end, we have modeled the MPEG-2 video channel using the transition probability matrices given by the data hiding procedure, using which we compute the (hiding scheme dependent) channel capacity.
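
    The scalar QIM embedding at the heart of the scheme can be sketched as follows; the step size DELTA and the noise level are illustrative, not the paper's tuned values.

```python
import random

DELTA = 8.0   # quantizer step (illustrative; the paper adapts this
              # via the MPEG-2 reference quantization parameter)

def qim_embed(coeff, bit):
    """Hide one bit in a DCT coefficient by quantizing it onto one of
    two interleaved lattices, offset by DELTA/2 (scalar QIM)."""
    offset = bit * DELTA / 2.0
    return round((coeff - offset) / DELTA) * DELTA + offset

def qim_extract(coeff):
    """Decode by choosing the lattice whose nearest point is closer."""
    d0 = abs(coeff - qim_embed(coeff, 0))
    d1 = abs(coeff - qim_embed(coeff, 1))
    return 0 if d0 <= d1 else 1

random.seed(7)
bits = [random.randint(0, 1) for _ in range(200)]
coeffs = [random.uniform(-50.0, 50.0) for _ in range(200)]
marked = [qim_embed(c, b) for c, b in zip(coeffs, bits)]

# Any perturbation smaller than DELTA/4 in magnitude cannot flip a bit.
noisy = [c + random.uniform(-1.9, 1.9) for c in marked]
decoded = [qim_extract(c) for c in noisy]
```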

  14. Automatic Time Stepping with Global Error Control for Groundwater Flow Models

    SciTech Connect

    Tang, Guoping

    2008-09-01

    An automatic time stepping scheme with global error control is proposed for the time integration of the diffusion equation to simulate groundwater flow in confined aquifers. The scheme is based on an a posteriori error estimate for the discontinuous Galerkin (dG) finite element methods. A stability factor is involved in the error estimate and is used to adapt the time step and control the global temporal error for the backward difference method. The stability factor can be estimated by solving a dual problem; it is not sensitive to the accuracy of the dual solution, so the overhead computational cost can be minimized by solving the dual problem using large time steps. Numerical experiments are conducted to show the application and the performance of the automatic time stepping scheme. Implementation of the scheme can lead to improvement in accuracy and efficiency for groundwater flow models.

  15. Adaptive Numerical Dissipative Control in High Order Schemes for Multi-D Non-Ideal MHD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, B.

    2004-01-01

    The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that include viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free of numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multi-resolution wavelets (WAV) (for the above types of flow feature). These filter approaches also provide a natural and efficient way for the minimization of Div(B) numerical error. The filter scheme consists of spatially sixth order or higher non-dissipative spatial difference operators as the base scheme for the inviscid flux derivatives. If necessary, a small amount of high order linear dissipation is used to remove spurious high frequency oscillations. For example, an eighth-order centered linear dissipation (AD8) might be included in conjunction with a spatially sixth-order base scheme. The inviscid difference operator is applied twice for the viscous flux derivatives. After the completion of a full time step of the base scheme step, the solution is adaptively filtered by the product of a 'flow detector' and the 'nonlinear dissipative portion' of a high-resolution shock-capturing scheme. In addition, the scheme independent wavelet flow detector can be used in conjunction with spatially compact, spectral or spectral element type of base schemes. The ACM and wavelet filter schemes using the dissipative portion of a second-order shock-capturing scheme with sixth-order spatial central base scheme for both the inviscid and viscous MHD flux

  16. An adaptive control scheme for coordinated multimanipulator systems

    SciTech Connect

    Jonghann Jean; Lichen Fu (Dept. of Electrical Engineering)

    1993-04-01

    The problem of adaptive coordinated control of multiple robot arms transporting an object is addressed. A stable adaptive control scheme for both trajectory tracking and internal force control is presented, with detailed analyses of the tracking properties of the object position, the object velocity, and the internal forces exerted on the object. It is shown that this control scheme can achieve satisfactory tracking performance without using measurements of the contact forces and their derivatives, that it can be realized by decentralized implementation to reduce the computational burden, and that efficient adaptive control strategies can be incorporated to reduce the computational complexity.

  17. Adaptable Iterative and Recursive Kalman Filter Schemes

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato

    2014-01-01

    Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The iterated Kalman filter (IKF) and the recursive update filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where N is the number of recursions, a tuning parameter. This paper introduces an adaptable RUF algorithm to calculate N on the fly; a similar technique can be used for the IKF as well.
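
    The recursive-update idea can be illustrated with the classical iterated Kalman measurement update on a scalar nonlinear measurement; setting the iteration count to 1 recovers the plain EKF update. The RUF's gradual reweighting is omitted in this sketch.

```python
def iekf_update(x0, P, z, h, h_prime, R, iters):
    """Iterated (extended) Kalman measurement update: re-linearize h
    about the current iterate; iters=1 reduces to the plain EKF."""
    x = x0
    for _ in range(iters):
        H = h_prime(x)
        K = P * H / (H * P * H + R)
        x = x0 + K * (z - h(x) - H * (x0 - x))
    return x

h = lambda x: x * x           # strongly nonlinear scalar measurement
h_prime = lambda x: 2.0 * x
x0, P, z, R = 1.0, 1.0, 4.0, 0.1

x_ekf = iekf_update(x0, P, z, h, h_prime, R, iters=1)
x_ikf = iekf_update(x0, P, z, h, h_prime, R, iters=20)

# The iterated update drives the MAP cost lower than the single EKF step.
cost = lambda x: (x - x0) ** 2 / (2 * P) + (z - h(x)) ** 2 / (2 * R)
```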

  18. A discrete-time adaptive control scheme for robot manipulators

    NASA Technical Reports Server (NTRS)

    Tarokh, M.

    1990-01-01

    A discrete-time model reference adaptive control scheme is developed for trajectory tracking of robot manipulators. The scheme utilizes feedback, feedforward, and auxiliary signals, obtained from joint angle measurement through simple expressions. Hyperstability theory is utilized to derive the adaptation laws for the controller gain matrices. It is shown that trajectory tracking is achieved despite gross robot parameter variation and uncertainties. The method offers considerable design flexibility and enables the designer to improve the performance of the control system by adjusting free design parameters. The discrete-time adaptation algorithm is extremely simple and is therefore suitable for real-time implementation. Simulations and experimental results are given to demonstrate the performance of the scheme.

  19. Accelerating spectral-element simulations of seismic wave propagation using local time stepping

    NASA Astrophysics Data System (ADS)

    Peter, D. B.; Rietmann, M.; Galvez, P.; Nissen-Meyer, T.; Grote, M.; Schenk, O.

    2013-12-01

    Seismic tomography using full-waveform inversion requires accurate simulations of seismic wave propagation in complex 3D media. However, finite element meshing in complex media often leads to areas of local refinement, generating small elements that accurately capture e.g. strong topography and/or low-velocity sediment basins. For explicit time schemes, this dramatically reduces the global time-step for wave-propagation problems due to numerical stability conditions, ultimately making seismic inversions prohibitively expensive. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. Numerical simulations are thus liberated of global time-step constraints potentially speeding up simulation runtimes significantly. We present here a new, efficient multi-level LTS-Newmark scheme for general use with spectral-element methods (SEM) with applications in seismic wave propagation. We fit the implementation of our scheme onto the package SPECFEM3D_Cartesian, which is a widely used community code, simulating seismic and acoustic wave propagation in earth-science applications. Our new LTS scheme extends the 2nd-order accurate Newmark time-stepping scheme, and leads to an efficient implementation, producing real-world speedup of multi-resolution seismic applications. Furthermore, we generalize the method to utilize many refinement levels with a design specifically for continuous finite elements. We demonstrate performance speedup using a state-of-the-art dynamic earthquake rupture model for the Tohoku-Oki event, which is currently limited by small elements along the rupture fault. Utilizing our new algorithmic LTS implementation together with advances in exploiting graphic processing units (GPUs), numerical seismic wave propagation simulations in complex media will dramatically reduce computation times, empowering high
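
    The level grouping at the heart of multi-level LTS can be sketched with a toy cost model; the element step sizes and level cap below are illustrative.

```python
import math

def lts_levels(dt_elem, n_levels=4):
    """Group elements into power-of-two time-step levels.

    dt_elem: per-element stable time step from the local CFL condition.
    Element e is advanced with dt_min * 2**level(e), so only the few
    small elements pay for the fine step."""
    dt_min = min(dt_elem)
    levels = [min(n_levels - 1, int(math.log2(dt / dt_min)))
              for dt in dt_elem]
    return dt_min, levels

# A mesh with a handful of tiny elements (e.g. along a fault or in a
# low-velocity basin) and many large ones; steps in arbitrary units.
dt_elem = [1.0] * 10 + [8.0] * 990
dt_min, levels = lts_levels(dt_elem)

T = 1000.0  # simulated time
cost_global = len(dt_elem) * T / dt_min              # everyone at dt_min
cost_lts = sum(T / (dt_min * 2 ** l) for l in levels)
```

    Here the ten small elements alone force a global step 8 times finer than most of the mesh needs, so the LTS cost is a small fraction of the global-step cost.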

  20. Hydrologic consistency analysed through modeling at multiple time steps: does hydrological model performance benefit from finer time step information?

    NASA Astrophysics Data System (ADS)

    Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken

    2015-04-01

    We investigate the operational utility of fine time step hydro-climatic information using a large catchment data set. The originality of this data set lies in the availability of precipitation data from the 6-minute rain gauges of Météo-France, and in the size of the catchment set (217 French catchments in total). The rainfall-runoff model used (GR4) has been adapted to hourly and sub-hourly time steps (up to 6-minute) from the daily time step version (Perrin et al., 2003). The model is applied at different time steps ranging from 6-minute to 1 day (6-, 12-, 30-minute, 1-, 3-, 6-, 12-hour and 1 day) and the evolution of model performance for each catchment is evaluated at the daily time step by aggregation of model outputs. Three classes of behavior are found according to the trend of model performance as the time step becomes finer: (i) catchments presenting an improvement of model performance; (ii) catchments with a model performance insensitive to the time step; (iii) catchments for which the performance even deteriorates as the time step becomes finer. The reasons behind these different trends are investigated from a hydrological point of view, by relating the model sensitivity to data at finer time step to catchment descriptors. References: Perrin, C., C. Michel and V. Andréassian (2003), "Improvement of a parsimonious model for streamflow simulation", Journal of Hydrology, 279(1-4): 275-289.

  1. Extrapolated implicit-explicit time stepping.

    SciTech Connect

    Constantinescu, E. M.; Sandu, A.; Mathematics and Computer Science; Virginia Polytechnic Inst. and State Univ.

    2010-01-01

    This paper constructs extrapolated implicit-explicit time stepping methods that allow one to efficiently solve problems with both stiff and nonstiff components. The proposed methods are based on Euler steps and can provide very high order discretizations of ODEs, index-1 DAEs, and PDEs in the method-of-lines framework. Implicit-explicit schemes based on extrapolation are simple to construct, easy to implement, and straightforward to parallelize. This work establishes the existence of perturbed asymptotic expansions of global errors, explains the convergence orders of these methods, and studies their linear stability properties. Numerical results with stiff ODE, DAE, and PDE test problems confirm the theoretical findings and illustrate the potential of these methods to solve multiphysics multiscale problems.
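
    A minimal sketch of the construction for a scalar split problem y' = lam*y + g(t): an implicit-explicit Euler base method plus one Richardson extrapolation level. The paper builds full high-order variants; this shows only the first extrapolant.

```python
import math

def imex_euler(y, t, dt, lam, g):
    """One implicit-explicit Euler step for y' = lam*y + g(t):
    the stiff linear term is implicit, the nonstiff forcing explicit."""
    return (y + dt * g(t)) / (1.0 - dt * lam)

def imex_extrapolated(y, t, dt, lam, g):
    """Richardson extrapolation of the 1st-order IMEX Euler step:
    combine one dt step with two dt/2 steps to cancel the O(dt) error."""
    y_full = imex_euler(y, t, dt, lam, g)
    y_half = imex_euler(y, t, dt / 2, lam, g)
    y_half = imex_euler(y_half, t + dt / 2, dt / 2, lam, g)
    return 2.0 * y_half - y_full

lam, g = -50.0, math.cos       # stiff decay plus nonstiff forcing
T, n = 1.0, 20
dt = T / n

y1 = y2 = 0.0
for i in range(n):
    y1 = imex_euler(y1, i * dt, dt, lam, g)
    y2 = imex_extrapolated(y2, i * dt, dt, lam, g)

# Fine-step reference solution for comparison.
y_ref, m = 0.0, 20000
for i in range(m):
    y_ref = imex_euler(y_ref, i * (T / m), T / m, lam, g)
```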

  2. A generic efficient adaptive grid scheme for rocket propulsion modeling

    NASA Technical Reports Server (NTRS)

    Mo, J. D.; Chow, Alan S.

    1993-01-01

    The objective of this research is to develop an efficient, time-accurate numerical algorithm to discretize the Navier-Stokes equations for the predictions of internal one-, two-dimensional and axisymmetric flows. A generic, efficient, elliptic adaptive grid generator is implicitly coupled with the Lower-Upper factorization scheme in the development of ALUNS computer code. The calculations of one-dimensional shock tube wave propagation and two-dimensional shock wave capture, wave-wave interactions, shock wave-boundary interactions show that the developed scheme is stable, accurate and extremely robust. The adaptive grid generator produced a very favorable grid network by a grid speed technique. This generic adaptive grid generator is also applied in the PARC and FDNS codes and the computational results for solid rocket nozzle flowfield and crystal growth modeling by those codes will be presented in the conference, too. This research work is being supported by NASA/MSFC.

  3. Accuracy-based time step criteria for solving parabolic equations

    SciTech Connect

    Mohtar, R.; Segerlind, L.

    1995-12-31

    Parabolic equations govern many transient engineering problems. Space integration using finite element or finite difference methods changes the parabolic partial differential equation into a system of ordinary differential equations, and time integration schemes are needed to solve the latter. In order to perform this integration accurately, a proper time step must be provided. Time step estimates based on stability criteria have been prescribed in the literature. This paper presents time step estimates that satisfy accuracy as well as stability criteria. These estimates were correlated to the Fourier and Courant numbers, and the stability-based criteria were found to be overly conservative for some integration schemes. Suggestions as to which time integration scheme is best to use are also presented.

  4. An Adaptive Ship Detection Scheme for Spaceborne SAR Imagery.

    PubMed

    Leng, Xiangguang; Ji, Kefeng; Zhou, Shilin; Xing, Xiangwei; Zou, Huanxin

    2016-01-01

    With the rapid development of spaceborne synthetic aperture radar (SAR) and the increasing need of ship detection, research on adaptive ship detection in spaceborne SAR imagery is of great importance. Focusing on practical problems of ship detection, this paper presents a highly adaptive ship detection scheme for spaceborne SAR imagery. It is able to process a wide range of sensors, imaging modes and resolutions. Two main stages are identified in this paper, namely: ship candidate detection and ship discrimination. Firstly, this paper proposes an adaptive land masking method using ship size and pixel size. Secondly, taking into account the imaging mode, incidence angle, and polarization channel of SAR imagery, it implements adaptive ship candidate detection in spaceborne SAR imagery by applying different strategies to different resolution SAR images. Finally, aiming at different types of typical false alarms, this paper proposes a comprehensive ship discrimination method in spaceborne SAR imagery based on confidence level and complexity analysis. Experimental results based on RADARSAT-1, RADARSAT-2, TerraSAR-X, RS-1, and RS-3 images demonstrate that the adaptive scheme proposed in this paper is able to detect ship targets in a fast, efficient and robust way. PMID:27563902

  5. Adaptive Coding and Modulation Scheme for Ka Band Space Communications

    NASA Astrophysics Data System (ADS)

    Lee, Jaeyoon; Yoon, Dongweon; Lee, Wooju

    2010-06-01

    Rain attenuation can seriously degrade the availability of space communication links in the Ka band. To reduce the effect of rain attenuation on the error performance of space communications in the Ka band, an adaptive coding and modulation (ACM) scheme is required. In this paper, to achieve reliable telemetry data transmission, we propose adaptive selection of the coding and modulation level using the turbo code recommended by the Consultative Committee for Space Data Systems (CCSDS) and various modulation methods (QPSK, 8PSK, 4+12 APSK, and 4+12+16 APSK) adopted in Digital Video Broadcasting - Satellite 2 (DVB-S2).
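
    The mode-selection logic of an ACM scheme can be sketched as a threshold table. The Es/N0 switching points below are placeholders, not the CCSDS or DVB-S2 operating points.

```python
# Hypothetical ACM table: (min Es/N0 in dB, modulation, turbo-code rate).
# Threshold values are illustrative only.
ACM_TABLE = [
    (1.0,  "QPSK",         1/3),
    (4.5,  "QPSK",         1/2),
    (7.0,  "8PSK",         2/3),
    (10.5, "4+12 APSK",    3/4),
    (14.0, "4+12+16 APSK", 5/6),
]

def select_modcod(esn0_db):
    """Pick the most efficient mode whose threshold the link still
    clears; fall back to the most robust mode in a deep rain fade."""
    best = ACM_TABLE[0]
    for entry in ACM_TABLE:
        if esn0_db >= entry[0]:
            best = entry
    return best

mode_clear = select_modcod(15.0)   # clear sky
mode_rain = select_modcod(2.0)     # rain fade
```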

  6. Image edge detection based on adaptive lifting scheme

    NASA Astrophysics Data System (ADS)

    Xia, Ping; Xiang, Xuejun; Wan, Junli

    2009-10-01

    Image edges arise from discontinuities in gray level; they are a basic characteristic of image information, and edge detection is one of the hot topics in image processing. This paper analyzes traditional image edge detection algorithms and their shortcomings, and uses adaptive lifting wavelet analysis, in which the predict and update filters are adaptively adjusted according to the local characteristics of the signal, to achieve an accurate match to the image content. At the same time, the wavelet edge detection operator is improved, yielding an edge detection algorithm suited to the adaptive lifting scheme, which is then applied to medical image edge detection. The experimental results show that the proposed algorithm performs better than the traditional algorithms.
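
    The predict/update structure of lifting can be sketched with the linear (5/3-style) wavelet on a 1D signal; an adaptive scheme such as the paper's would switch predictors based on local features. Boundary handling here is simple replication, and the input length is assumed even.

```python
def lifting_forward(x):
    """One level of the linear lifting wavelet transform:
    split -> predict odds from even neighbours -> update evens."""
    even, odd = x[0::2], x[1::2]
    n = len(odd)
    d = [odd[i] - 0.5 * (even[i] + even[min(i + 1, len(even) - 1)])
         for i in range(n)]                      # detail (edge-sensitive)
    s = [even[i] + 0.25 * (d[max(i - 1, 0)] + d[min(i, n - 1)])
         for i in range(len(even))]              # smoothed approximation
    return s, d

def lifting_inverse(s, d):
    """Undo the lifting steps in reverse order: perfect reconstruction
    holds because each lifting step is trivially invertible."""
    n = len(d)
    even = [s[i] - 0.25 * (d[max(i - 1, 0)] + d[min(i, n - 1)])
            for i in range(len(s))]
    odd = [d[i] + 0.5 * (even[i] + even[min(i + 1, len(even) - 1)])
           for i in range(n)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
s, d = lifting_forward(x)
rec = lifting_inverse(s, d)
```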

  7. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    SciTech Connect

    Finn, John M.

    2015-03-01

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.

  8. Towards Adaptive High-Resolution Images Retrieval Schemes

    NASA Astrophysics Data System (ADS)

    Kourgli, A.; Sebai, H.; Bouteldja, S.; Oukil, Y.

    2016-06-01

    Nowadays, content-based image-retrieval techniques constitute powerful tools for archiving and mining of large remote sensing image databases. High spatial resolution images are complex and differ widely in their content, even within the same category. All images are more or less textured and structured. During the last decade, different approaches for the retrieval of this type of images have been proposed, differing mainly in the type of features extracted. As these features are supposed to efficiently represent the query image, they should be adapted to all kinds of images contained in the database. However, if the image to recognize is somewhat or very structured, a shape feature will be correspondingly effective, while if the image is composed of a single texture, a parameter reflecting its texture will prove more efficient. This calls for adaptive schemes. For this purpose, we propose to investigate this idea and adapt the retrieval scheme to the image nature. This is achieved by performing a preliminary analysis so that the indexing stage becomes supervised. First results show that in this way, simple methods can match the performance of complex methods such as those based on the creation of bags of visual words using SIFT (Scale Invariant Feature Transform) descriptors and those based on multi-scale feature extraction using wavelets and steerable pyramids.

  9. Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)

    2000-01-01

    This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever sufficient order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme supplants the usual exact single-state mean-value linearization of flux divergence, typically used for the Euler equations of gasdynamics, by an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier so that correct weak solutions are still obtained in the limit of mesh refinement. Consequently, we then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities together with multi-level mesh refinement are provided to verify the analysis.

  10. A Stochastic, Resonance-Free Multiple Time-Step Algorithm for Polarizable Models That Permits Very Large Time Steps.

    PubMed

    Margul, Daniel T; Tuckerman, Mark E

    2016-05-10

    Molecular dynamics remains one of the most widely used computational tools in the theoretical molecular sciences to sample an equilibrium ensemble distribution and/or to study the dynamical properties of a system. The efficiency of a molecular dynamics calculation is limited by the size of the time step that can be employed, which is dictated by the highest frequencies in the system. However, many properties of interest are connected to low-frequency, long time-scale phenomena, requiring many small time steps to capture. This ubiquitous problem can be ameliorated by employing multiple time-step algorithms, which assign different time steps to forces acting on different time scales. In such a scheme, fast forces are evaluated more frequently than slow forces, and as the former are often computationally much cheaper to evaluate, the savings can be significant. Standard multiple time-step approaches are limited, however, by resonance phenomena, wherein motion on the fastest time scales limits the step sizes that can be chosen for the slower time scales. In atomistic models of biomolecular systems, for example, the largest time step is typically limited to around 5 fs. Previously, we introduced an isokinetic extended phase-space algorithm (Minary et al. Phys. Rev. Lett. 2004, 93, 150201) and its stochastic analog (Leimkuhler et al. Mol. Phys. 2013, 111, 3579) that eliminate resonance phenomena through a set of kinetic energy constraints. In simulations of a fixed-charge flexible model of liquid water, for example, the time step that could be assigned to the slow forces approached 100 fs. In this paper, we develop a stochastic isokinetic algorithm for multiple time-step molecular dynamics calculations using a polarizable model based on fluctuating dipoles. The scheme developed here employs two sets of induced dipole moments, specifically, those associated with short-range interactions and those associated with a full set of interactions. The scheme is demonstrated on
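    The slow/fast force splitting described above can be sketched with a generic r-RESPA-style integrator (velocity Verlet on the fast force inside, impulse half-kicks of the slow force outside). This is an illustrative toy, not the paper's stochastic isokinetic scheme for polarizable models; the function names and the two-spring test system are invented for the example.

```python
def respa_step(x, v, dt, n_inner, fast_force, slow_force, mass=1.0):
    """One r-RESPA step: the slow force is applied as an impulse at the
    half-step boundaries, while the fast force is integrated with the
    smaller inner time step dt/n_inner using velocity Verlet."""
    h = dt / n_inner
    v += 0.5 * dt * slow_force(x) / mass          # slow half-kick
    for _ in range(n_inner):                      # fast inner loop
        v += 0.5 * h * fast_force(x) / mass
        x += h * v
        v += 0.5 * h * fast_force(x) / mass
    v += 0.5 * dt * slow_force(x) / mass          # slow half-kick
    return x, v

# Toy system: stiff spring (fast force) plus a weak spring (slow force).
fast = lambda x: -100.0 * x
slow = lambda x: -0.1 * x
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, dt=0.05, n_inner=10,
                      fast_force=fast, slow_force=slow)
```

Because the fast force is cheap here, the savings are invisible; in molecular dynamics the slow (long-range) force dominates the cost, so evaluating it once per outer step is the whole point.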

  11. Highly accurate adaptive finite element schemes for nonlinear hyperbolic problems

    NASA Astrophysics Data System (ADS)

    Oden, J. T.

    1992-08-01

    This document is a final report of research activities supported under General Contract DAAL03-89-K-0120 between the Army Research Office and the University of Texas at Austin from July 1, 1989 through June 30, 1992. The project supported several Ph.D. students over the contract period, two of which are scheduled to complete dissertations during the 1992-93 academic year. Research results produced during the course of this effort led to 6 journal articles, 5 research reports, 4 conference papers and presentations, 1 book chapter, and two dissertations (nearing completion). It is felt that several significant advances were made during the course of this project that should have an impact on the field of numerical analysis of wave phenomena. These include the development of high-order, adaptive, hp-finite element methods for elastodynamic calculations and high-order schemes for linear and nonlinear hyperbolic systems. Also, a theory of multi-stage Taylor-Galerkin schemes was developed and implemented in the analysis of several wave propagation problems, and was configured within a general hp-adaptive strategy for these types of problems. Further details on research results and on areas requiring additional study are given in the Appendix.

  12. An Adaptive Motion Estimation Scheme for Video Coding

    PubMed Central

    Gao, Yuan; Jia, Kebin

    2014-01-01

    The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. Firstly, new motion estimation search patterns are designed according to the statistical results of motion vector (MV) distribution information. Secondly, an MV distribution prediction method is designed, covering both the size and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is carried out with the new search patterns. Experimental results show that more than 50% of the total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% while the rate-distortion performance is not compromised. PMID:24672313
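    The search-pattern idea can be illustrated with a minimal greedy small-diamond block-matching search on a synthetic frame pair. This is a simplified cousin of the hybrid UMHexagonS patterns, not the proposed adaptive scheme; the function names and the linear-gradient test image are invented for the sketch.

```python
import numpy as np

def sad(block, ref_block):
    """Sum of absolute differences between two equal-size blocks."""
    return int(np.abs(block.astype(np.int64) - ref_block.astype(np.int64)).sum())

def small_diamond_search(cur, ref, bx, by, bs=8, max_iter=32):
    """Greedy small-diamond motion search: starting from the zero vector,
    repeatedly evaluate the four diamond neighbours and move to the best
    improving candidate until the centre wins."""
    block = cur[by:by + bs, bx:bx + bs]
    mvx = mvy = 0
    best = sad(block, ref[by:by + bs, bx:bx + bs])
    while max_iter > 0:
        max_iter -= 1
        cand = None
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            x, y = bx + mvx + dx, by + mvy + dy
            if 0 <= x <= ref.shape[1] - bs and 0 <= y <= ref.shape[0] - bs:
                cost = sad(block, ref[y:y + bs, x:x + bs])
                if cost < best:
                    best, cand = cost, (mvx + dx, mvy + dy)
        if cand is None:       # centre is the local minimum: stop
            break
        mvx, mvy = cand
    return mvx, mvy, best

# Smooth test frame shifted by (+2, +1); the true motion vector is (2, 1).
X, Y = np.meshgrid(np.arange(64), np.arange(64))
ref = 3 * X + 2 * Y
cur = np.roll(np.roll(ref, -2, axis=1), -1, axis=0)
mv = small_diamond_search(cur, ref, bx=16, by=16)
```

On real, noisy frames such a greedy search can stall in local minima, which is exactly why production algorithms combine several patterns and initial predictors.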

  13. Multiple-time-stepping generalized hybrid Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo (GHMC) method. The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by the generalized hybrid Monte Carlo method, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.

  14. Multiple-time-stepping generalized hybrid Monte Carlo methods

    SciTech Connect

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo (GHMC) method. The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by the generalized hybrid Monte Carlo method, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.

  15. An adaptive identification and control scheme for large space structures

    NASA Technical Reports Server (NTRS)

    Carroll, J. V.

    1988-01-01

    A unified identification and control scheme capable of achieving space platform performance objectives under nominal or failure conditions is described. Preliminary results are also presented, showing that the methodology offers much promise for effective robust control of large space structures. The control method is a multivariable, adaptive, output predictive controller called Model Predictive Control (MPC). MPC uses a state space model and input reference trajectories of set or tracking points to adaptively generate optimum commands. For a fixed model, MPC processes commands with great efficiency and is also highly robust. A key feature of MPC is its ability to control either nonminimum-phase or open-loop-unstable systems. As an output controller, MPC does not explicitly require full state feedback, as do most multivariable (e.g., Linear Quadratic) methods. Its features are very useful in LSS operations, as they allow non-collocated actuators and sensors. The identification scheme is based on canonical variate analysis (CVA) of input and output data. The CVA technique is particularly suited for the measurement and identification of structural dynamic processes - that is, unsteady transient or dynamically interacting processes such as those between aerodynamics and structural deformation - from short, noisy data. CVA is structured so that the identification can be done in real or near-real time, using computationally stable algorithms. Modeling LSS dynamics in 1-g laboratories has always been a major impediment not only to understanding their behavior in orbit, but also to controlling it. In cases where the theoretical model is not confirmed, current methods provide few clues concerning additional dynamical relationships that are not included in the theoretical models. CVA needs no a priori model data or structure; all statistically significant dynamical states are determined using natural, entropy-based methods. Heretofore, a major limitation in applying adaptive

  16. Importance of variable time-step algorithms in spatial kinetics calculations

    SciTech Connect

    Aviles, B.N.

    1994-12-31

    The use of spatial kinetics codes in conjunction with advanced thermal-hydraulics codes is becoming more widespread as better methods and faster computers appear. The integrated code packages are being used for routine nuclear power plant design and analysis, including simulations with instrumentation and control systems initiating system perturbations such as rod motion and scrams. As a result, it is important to include a robust variable time-step algorithm that can accurately and efficiently follow widely varying plant neutronic behavior. This paper describes the variable time-step algorithm in SPANDEX and compares the automatic time-step scheme with a more traditional fixed time-step scheme.
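    The abstract does not reproduce the SPANDEX algorithm, but a common flavor of variable time-step control is step doubling: compare one full step against two half steps, reject and shrink the step when the difference exceeds a tolerance, and grow it otherwise. A minimal sketch with invented names, using a second-order explicit midpoint rule as the base method:

```python
def adaptive_step(f, t, y, dt, tol, step):
    """Step-doubling error control: compare one step of size dt against two
    steps of size dt/2; halve dt until the difference meets tol, then
    suggest a grown step for next time (2nd-order base -> local error ~ dt^3)."""
    while True:
        y_full = step(f, t, y, dt)
        y_half = step(f, t + dt / 2.0, step(f, t, y, dt / 2.0), dt / 2.0)
        err = abs(y_half - y_full)
        if err <= tol or dt < 1e-12:
            grow = min(2.0, 0.9 * (tol / max(err, 1e-16)) ** (1.0 / 3.0))
            return t + dt, y_half, dt * grow   # accept the more accurate value
        dt *= 0.5                              # reject and retry

def midpoint(f, t, y, dt):
    """Second-order explicit midpoint rule."""
    return y + dt * f(t + dt / 2.0, y + 0.5 * dt * f(t, y))

# Error-controlled integration of y' = -y, y(0) = 1, up to t = 1.
t, y, dt = 0.0, 1.0, 0.5
while 1.0 - t > 1e-12:
    dt = min(dt, 1.0 - t)
    t, y, dt = adaptive_step(lambda s, u: -u, t, y, dt, tol=1e-6, step=midpoint)
```

The controller automatically takes small steps through fast transients (a scram, rod motion) and large steps through quiescent phases, which is the behavior the paper argues for.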

  17. Simulating system dynamics with arbitrary time step

    NASA Astrophysics Data System (ADS)

    Kantorovich, L.

    2007-02-01

    We suggest a dynamic simulation method that allows efficient and realistic modeling of kinetic processes, such as atomic diffusion, in which time has its actual meaning. Our method is similar in spirit to widely used kinetic Monte Carlo (KMC) techniques; however, in our approach, the time step can be chosen arbitrarily. This has an advantage in some cases, e.g., when the transition rates change sufficiently fast over the period of the KMC time step (due, for instance, to the time dependence of external factors influencing the kinetics, such as a moving scanning probe microscopy tip or an external time-dependent field), or when the clock time is set by external conditions and it is convenient to use equal time steps, instead of the random time increments of the KMC algorithm, in order to build up probability distribution functions. We show that an arbitrary choice of the time step can be afforded by building up the complete list of events, including the “residence site” and multihop transitions. The idea of the method is illustrated in a simple “toy” model of a finite one-dimensional lattice of potential wells with unequal jump rates to either side, which can be studied analytically. We show that our general kinetics method reproduces exactly the solution of the corresponding master equations for any choice of the time step. The final kinetics also matches standard KMC, and this allows a better understanding of that algorithm, in which the time step is chosen in a certain way and the system always advances by a single hop.
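    A heavily simplified sketch of the fixed-time-step idea: if only single-hop and "residence" (stay-put) events are kept, each step of length dt the walker hops left or right with probability rate*dt or stays where it is. This restricted version is valid only when the total rate times dt is small; the paper's complete event list with multihop transitions is what removes that restriction. All names are invented for the example.

```python
import random

def fixed_dt_walk(rate_left, rate_right, n_steps, dt, x0=0, seed=1):
    """Fixed-time-step kinetics on a 1D lattice: in each step of length dt
    the walker hops left/right with probability rate*dt, or stays put
    (the 'residence' event).  Single-hop approximation: requires
    (rate_left + rate_right) * dt << 1."""
    rng = random.Random(seed)
    p_left, p_right = rate_left * dt, rate_right * dt
    assert p_left + p_right < 1.0, "time step too large for single-hop events"
    x = x0
    for _ in range(n_steps):
        u = rng.random()
        if u < p_left:
            x -= 1
        elif u < p_left + p_right:
            x += 1
    return x

# Biased walk: expected drift is (rate_right - rate_left) * dt * n_steps = 200.
final = fixed_dt_walk(rate_left=1.0, rate_right=2.0, n_steps=20000, dt=0.01)
```

Unlike standard KMC, every trajectory here is sampled on the same clock grid, so histograms over many runs directly approximate the time-dependent occupation probabilities of the master equation.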

  18. Higher-order schemes with CIP method and adaptive Soroban grid towards mesh-free scheme

    NASA Astrophysics Data System (ADS)

    Yabe, Takashi; Mizoe, Hiroki; Takizawa, Kenji; Moriki, Hiroshi; Im, Hyo-Nam; Ogata, Youichi

    2004-02-01

    A new class of body-fitted grid system that can keep third-order accuracy in time and space is proposed with the help of the CIP (constrained interpolation profile/cubic interpolated propagation) method. The grid system consists of straight lines and grid points moving along these lines like an abacus - soroban in Japanese. The length of each line and the number of grid points on each line can be different. The CIP scheme is well suited to this mesh system, and calculations at large CFL numbers (>10) on locally refined meshes are easily performed. Mesh generation and the search for the upstream departure point are very simple, and an almost mesh-free treatment is possible. Adaptive grid movement and local mesh refinement are demonstrated.

  19. Attitude determination using an adaptive multiple model filtering Scheme

    NASA Technical Reports Server (NTRS)

    Lam, Quang; Ray, Surendra N.

    1995-01-01

    Attitude determination has been a permanent topic of active research and will likely remain of lasting interest to spacecraft system designers. Its role is to provide a reference for controls such as pointing the directional antennas or solar panels, stabilizing the spacecraft, or maneuvering the spacecraft to a new orbit. The Least Squares Estimation (LSE) technique was utilized to provide attitude determination for Nimbus 6 and Nimbus G. Despite its poor performance (in terms of estimation accuracy), LSE was considered an effective and practical approach to meet the urgent needs and requirements of the 1970s. One reason for the poor performance of the LSE scheme is the lack of dynamic filtering or 'compensation': the scheme is based entirely on the measurements, and no attempt is made to model the dynamic equations of motion of the spacecraft. We propose an adaptive filtering approach which employs a bank of Kalman filters to perform robust attitude estimation. The proposed approach, whose architecture is depicted, is essentially based on the latest results on the interacting multiple model design framework for handling unknown system noise characteristics or statistics. The concept fundamentally employs a bank of Kalman filters or submodels; instead of using fixed values for the system noise statistics of each submodel (per operating condition), as the traditional multiple model approach does, we use an on-line dynamic system noise identifier to 'identify' the system noise level (statistics) and update the filter noise statistics using 'live' information from the sensor model. The advanced noise identifier, whose architecture is also shown, is implemented using an advanced system identifier.
    To ensure the robust performance of the proposed advanced system identifier, it is further reinforced by a learning system, implemented (in the outer loop) using neural networks, to identify other unknown

  20. Attitude determination using an adaptive multiple model filtering Scheme

    NASA Astrophysics Data System (ADS)

    Lam, Quang; Ray, Surendra N.

    1995-05-01

    Attitude determination has been a permanent topic of active research and will likely remain of lasting interest to spacecraft system designers. Its role is to provide a reference for controls such as pointing the directional antennas or solar panels, stabilizing the spacecraft, or maneuvering the spacecraft to a new orbit. The Least Squares Estimation (LSE) technique was utilized to provide attitude determination for Nimbus 6 and Nimbus G. Despite its poor performance (in terms of estimation accuracy), LSE was considered an effective and practical approach to meet the urgent needs and requirements of the 1970s. One reason for the poor performance of the LSE scheme is the lack of dynamic filtering or 'compensation': the scheme is based entirely on the measurements, and no attempt is made to model the dynamic equations of motion of the spacecraft. We propose an adaptive filtering approach which employs a bank of Kalman filters to perform robust attitude estimation. The proposed approach, whose architecture is depicted, is essentially based on the latest results on the interacting multiple model design framework for handling unknown system noise characteristics or statistics. The concept fundamentally employs a bank of Kalman filters or submodels; instead of using fixed values for the system noise statistics of each submodel (per operating condition), as the traditional multiple model approach does, we use an on-line dynamic system noise identifier to 'identify' the system noise level (statistics) and update the filter noise statistics using 'live' information from the sensor model. The advanced noise identifier, whose architecture is also shown, is implemented using an advanced system identifier.
    To ensure the robust performance of the proposed advanced system identifier, it is further reinforced by a learning system, implemented (in the outer loop) using neural networks, to identify other unknown

  1. Low color distortion adaptive dimming scheme for power efficient LCDs

    NASA Astrophysics Data System (ADS)

    Nam, Hyoungsik; Song, Eun-Ji

    2013-06-01

    This paper demonstrates a color compensation algorithm that reduces the color distortion caused by mismatches between the reference gamma value of a dimming algorithm and the display gamma values of an LCD panel in a low-power adaptive dimming scheme. In 2010, we presented the YrYgYb algorithm, which used the display gamma values extracted from the luminance data of the red, green, and blue sub-pixels (Yr, Yg, and Yb), with simulation results. It was based on an ideal panel model in which the color coordinates were held fixed over the gray levels. In contrast, this work introduces an XrYgZb color compensation algorithm which obtains the display gamma values of red, green, and blue from the different tri-stimulus data Xr, Yg, and Zb, to further reduce the color distortion. Both simulation and measurement results confirm that the XrYgZb algorithm outperforms the previous YrYgYb algorithm. In simulations conducted on a practical model derived from measured data, the XrYgZb scheme achieves lower maximum and average color difference values of 3.7743 and 0.6230 over 24 test picture images, compared to 4.864 and 0.7156 for YrYgYb. In measurements on a 19-inch LCD panel, the XrYgZb method also achieves smaller color difference values of 1.444072 and 5.588195 over 49 combinations of red, green, and blue data, compared to 1.50578 and 6.00403 for YrYgYb, at backlight dimming ratios of 0.85 and 0.4.
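    For orientation, the basic gamma-domain compensation underlying such dimming schemes can be sketched as follows. This is the generic single-channel idea, assuming a pure power-law panel with gamma 2.2; it is not the paper's XrYgZb algorithm, and the function name is invented.

```python
def dim_compensate(v, dim, gamma=2.2):
    """Pixel compensation for backlight dimming: the panel emits roughly
    backlight * (v/255)**gamma, so after dimming the backlight to the
    fraction `dim` we boost the pixel code to keep the emitted luminance
    unchanged.  Codes that would need more than full drive are clipped,
    and gamma mismatch between channels is one source of the color
    distortion the paper addresses."""
    target = (v / 255.0) ** gamma            # relative luminance wanted
    boosted = min(target / dim, 1.0) ** (1.0 / gamma)
    return round(255 * boosted)
```

For example, `dim_compensate(100, 0.5)` returns a code above 100, trading pixel drive for backlight power.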

  2. Vectorizable algorithms for adaptive schemes for rapid analysis of SSME flows

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley

    1987-01-01

    An initial study into vectorizable algorithms for use in adaptive schemes for various types of boundary value problems is described. The focus is on two key aspects of adaptive computational methods which are crucial in the use of such methods (for complex flow simulations such as those in the Space Shuttle Main Engine): the adaptive scheme itself and the applicability of element-by-element matrix computations in a vectorizable format for rapid calculations in adaptive mesh procedures.

  3. Adaptive lifting scheme with sparse criteria for image coding

    NASA Astrophysics Data System (ADS)

    Kaaniche, Mounir; Pesquet-Popescu, Béatrice; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe

    2012-12-01

    Lifting schemes (LS) were found to be efficient tools for image coding purposes. Since LS-based decompositions depend on the choice of the prediction/update operators, many research efforts have been devoted to the design of adaptive structures. The most commonly used approaches optimize the prediction filters by minimizing the variance of the detail coefficients. In this article, we investigate techniques for optimizing sparsity criteria by focusing on the use of an ℓ1 criterion instead of an ℓ2 one. Since the output of a prediction filter may be used as an input for the other prediction filters, we then propose to optimize such a filter by minimizing a weighted ℓ1 criterion related to the global rate-distortion performance. More specifically, it will be shown that the optimization of the diagonal prediction filter depends on the optimization of the other prediction filters and vice versa. Based on this fact, we propose to jointly optimize the prediction filters by using an algorithm that alternates between the optimization of the filters and the computation of the weights. Experimental results show the benefits which can be drawn from the proposed optimization of the lifting operators.
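    A minimal fixed-filter example of the predict/update structure being optimized: one level of the (2,2) lifting wavelet with periodic extension. The adaptive schemes in the article replace these fixed coefficients with ℓ1-optimized ones; a key property of lifting, visible below, is that perfect reconstruction holds for any choice of the predict/update filters, because each step is undone exactly in reverse order.

```python
import numpy as np

def lifting_forward(x):
    """One level of the (2,2) lifting wavelet: split into even/odd samples,
    predict the odds from their even neighbours, then update the evens so
    the coarse signal keeps the mean of the input (periodic extension)."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - 0.5 * (even + np.roll(even, -1))       # predict step
    approx = even + 0.25 * (detail + np.roll(detail, 1))  # update step
    return approx, detail

def lifting_inverse(approx, detail):
    """Invert by undoing the update, then the predict, then merging."""
    even = approx - 0.25 * (detail + np.roll(detail, 1))
    odd = detail + 0.5 * (even + np.roll(even, -1))
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([3.0, 5.0, 4.0, 8.0, 7.0, 2.0, 1.0, 6.0])
a, d = lifting_forward(x)
x_rec = lifting_inverse(a, d)
```

The detail signal `d` is small wherever the input is locally smooth, which is what a sparsity (ℓ1) criterion on the details rewards.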

  4. An adaptive nonlinear solution scheme for reservoir simulation

    SciTech Connect

    Lett, G.S.

    1996-12-31

    Numerical reservoir simulation involves solving large, nonlinear systems of PDEs with strongly discontinuous coefficients. Because of the large demands on computer memory and CPU, most users must perform simulations on very coarse grids. The average properties of the fluids and rocks must be estimated on these grids. These coarse-grid "effective" properties are costly to determine, and risky to use, since their optimal values depend on the fluid flow being simulated. Thus, they must be found by trial-and-error techniques, and the coarser the grid, the poorer the results. This paper describes a numerical reservoir simulator which accepts fine-scale properties and automatically generates multiple levels of coarse-grid rock and fluid properties. The fine-grid properties and the coarse-grid simulation results are used to estimate discretization errors with multilevel error expansions. These expansions are local, and identify areas requiring local grid refinement. These refinements are added adaptively by the simulator, and the resulting composite grid equations are solved by a nonlinear Fast Adaptive Composite (FAC) Grid method, with a damped Newton algorithm being used on each local grid. The nonsymmetric linear systems of equations resulting from Newton's method are in turn solved by a preconditioned Conjugate Gradient-like algorithm. The scheme is demonstrated by performing fine- and coarse-grid simulations of several multiphase reservoirs from around the world.

  5. Interpersonal Adaptation in the Urban School: Development and Application of a Sensitizing Conceptual Scheme.

    ERIC Educational Resources Information Center

    Johnson, Burke; Strodl, Peter

    This paper presents a sensitizing conceptual scheme for examining interpersonal adaptation in urban classrooms. The construct "interpersonal adaptation" is conceptualized as the interaction of individual/personality factors, interpersonal factors, and social/cultural factors. The model is applied to the urban school. The conceptual scheme asserts…

  6. IMEX-a: an adaptive, fifth-order implicit-explicit integration scheme.

    SciTech Connect

    Brake, Matthew Robert

    2013-05-01

    This report presents an efficient and accurate method for integrating a system of ordinary differential equations, particularly those arising from a spatial discretization of partial differential equations. The algorithm developed, termed the IMEX-a algorithm, belongs to a class of algorithms known as implicit-explicit (IMEX) methods. The explicit step is based on a fifth-order Runge-Kutta explicit step known as the Dormand-Prince algorithm, which adaptively modifies the time step by calculating the error relative to a fourth-order estimate. The implicit step, which follows the explicit step, is based on a backward Euler method, a special case of the generalized trapezoidal method. Reasons for choosing both of these methods, along with the algorithm development, are presented. In applications that have less stringent accuracy requirements, several other methods are available through the IMEX-a toolbox, each of which simplifies the fifth-order Dormand-Prince explicit step: the third-order Bogacki-Shampine method, the second-order midpoint method, and the first-order Euler method. The performance of the algorithm is evaluated on two examples. First, a two-pawl system with contact is modeled. Results predicted by the IMEX-a algorithm are compared to those predicted by six widely used integration schemes. The IMEX-a algorithm is demonstrated to be significantly faster (by up to an order of magnitude) and at least as accurate as all of the other methods considered. A second example, an acoustic standing wave, is presented in order to assess the accuracy of the IMEX-a algorithm. Finally, sample code is given in order to demonstrate the implementation of the proposed algorithm.
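    For orientation, a first-order IMEX sketch (explicit Euler on the nonstiff part, backward Euler on a stiff linear part) shows the splitting structure that IMEX-a refines to fifth order. The splitting and the test problem are invented for the example; the point is that the combined update stays stable for step sizes at which a fully explicit method would blow up.

```python
import math

def imex_euler(y0, t_end, dt, lam, g):
    """First-order IMEX step for y' = lam*y + g(t, y) with lam stiff (< 0):
    explicit Euler on g, backward Euler on the linear stiff term, giving
    y_{n+1} = (y_n + dt*g(t_n, y_n)) / (1 - dt*lam), which is stable even
    when |lam|*dt is large."""
    t, y = 0.0, y0
    while t < t_end - 1e-12:
        y = (y + dt * g(t, y)) / (1.0 - dt * lam)
        t += dt
    return y

# Stiff relaxation toward sin(t): y' = -1000*(y - sin(t)) + cos(t),
# split as lam = -1000 (implicit) and g(t, y) = 1000*sin(t) + cos(t) (explicit).
y = imex_euler(y0=0.0, t_end=2.0, dt=0.05,
               lam=-1000.0, g=lambda t, y: 1000.0 * math.sin(t) + math.cos(t))
```

With dt = 0.05 we have |lam|*dt = 50, far outside the explicit Euler stability region, yet the IMEX iteration tracks the slow solution y ≈ sin(t).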

  7. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    SciTech Connect

    Finn, John M.

    2015-03-15

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012)], appears to work very well.
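    A minimal sketch of one of the base schemes discussed, implicit midpoint with a fixed-point solve, applied to a linear rotation field whose invariant circles should be preserved. The field and names are illustrative; adaptive time stepping and the paper's split-step constructions are not shown.

```python
import numpy as np

def implicit_midpoint(B, x, dt, tol=1e-12, max_iter=50):
    """One implicit-midpoint step x' = x + dt * B((x + x') / 2), solved by
    fixed-point iteration (converges when dt/2 times the Lipschitz constant
    of B is below one).  For divergence-free B in 2D the map is
    area-preserving, which is why invariant circles survive."""
    x_new = x + dt * B(x)                       # explicit Euler predictor
    for _ in range(max_iter):
        x_next = x + dt * B(0.5 * (x + x_new))
        if np.linalg.norm(x_next - x_new) < tol:
            break
        x_new = x_next
    return x_new

# Field-line analogue of a uniform rotation: B = (-y, x); radius is invariant.
B = lambda p: np.array([-p[1], p[0]])
x = np.array([1.0, 0.0])
for _ in range(1000):
    x = implicit_midpoint(B, x, dt=0.1)
```

For this linear field, the implicit midpoint map is an exact rotation (a Cayley transform), so the radius is conserved to roughly solver tolerance over the whole run; an explicit Euler integrator would spiral outward instead.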

  8. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    DOE PAGESBeta

    Finn, John M.

    2015-03-01

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.

  9. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    NASA Technical Reports Server (NTRS)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.

  10. A method for improving time-stepping numerics

    NASA Astrophysics Data System (ADS)

    Williams, P. D.

    2012-04-01

    In contemporary numerical simulations of the atmosphere, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. The most common time-stepping method is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following atmospheric models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability in these models, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter. The modification has become known as the RAW filter (Williams 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various atmospheric models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other models.
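    The leapfrog/RA/RAW combination described above fits in a few lines. The following is a minimal sketch under our own test setup (the filter form follows Williams 2011, applied to the oscillation equation dx/dt = iωx): with α = 1 the update reduces to the classical RA filter, while α ≈ 0.53 gives the RAW filter.

```python
import numpy as np

def leapfrog_filtered(omega, h, n_steps, nu=0.2, alpha=0.53):
    """Leapfrog integration of dx/dt = i*omega*x with an RA/RAW filter."""
    f = lambda x: 1j * omega * x
    x_prev = 1.0 + 0.0j
    x_curr = np.exp(1j * omega * h)   # start the scheme from the exact value
    for _ in range(n_steps - 1):
        x_next = x_prev + 2.0 * h * f(x_curr)
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)
        x_curr += alpha * d           # RA part: filter the middle time level
        x_next += (alpha - 1.0) * d   # RAW part: also correct the new level
        x_prev, x_curr = x_curr, x_next
    return x_curr

omega, h, n = 1.0, 0.1, 200
exact = np.exp(1j * omega * n * h)
err_ra  = abs(leapfrog_filtered(omega, h, n, alpha=1.0)  - exact)  # RA
err_raw = abs(leapfrog_filtered(omega, h, n, alpha=0.53) - exact)  # RAW
```

    For this setup the RAW run is markedly closer to the exact solution than the RA run, consistent with the amplitude-accuracy claims above; the remaining RAW error is dominated by the (still second-order) phase error.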

  11. Adaptive Source Coding Schemes for Geometrically Distributed Integer Alphabets

    NASA Technical Reports Server (NTRS)

    Cheung, K-M.; Smyth, P.

    1993-01-01

    We revisit the Gallager and van Voorhis optimal source coding scheme for geometrically distributed non-negative integer alphabets and show that the various subcodes in the popular Rice algorithm can be derived from the Gallager and van Voorhis code.

  12. Multiple time step integrators in ab initio molecular dynamics

    SciTech Connect

    Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.

    2014-02-28

    Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
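    The outer/inner splitting underlying such multiple-time-step integrators can be illustrated with an r-RESPA-style velocity-Verlet sketch. The one-particle fast/slow force split below (stiff spring vs. weak spring) is a toy stand-in for the fragment- and range-separation splittings described in the abstract.

```python
def respa_step(x, v, f_fast, f_slow, dt_outer, n_inner, m=1.0):
    """One multiple-time-step (r-RESPA-style) step: slow kick,
    n_inner fast velocity-Verlet substeps, slow kick."""
    dt_inner = dt_outer / n_inner
    v += 0.5 * dt_outer * f_slow(x) / m        # slow kick (outer step)
    for _ in range(n_inner):                   # fast inner loop
        v += 0.5 * dt_inner * f_fast(x) / m
        x += dt_inner * v
        v += 0.5 * dt_inner * f_fast(x) / m
    v += 0.5 * dt_outer * f_slow(x) / m        # slow kick (outer step)
    return x, v

# Toy split: stiff spring (fast) plus weak spring (slow).
k_fast, k_slow = 100.0, 1.0
f_fast = lambda x: -k_fast * x
f_slow = lambda x: -k_slow * x
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, f_fast, f_slow, dt_outer=0.05, n_inner=10)
```

    The slow force is evaluated ten times less often than the fast force, while the splitting remains symplectic, so the total energy stays bounded over the run.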

  13. An adaptive interpolation scheme for molecular potential energy surfaces.

    PubMed

    Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa

    2016-08-28

    The calculation of potential energy surfaces for quantum dynamics can be a time-consuming task, especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement allows the number of sample points to be greatly reduced by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version. PMID:27586901
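    A minimal sketch of polyharmonic-spline interpolation of the kind used inside each partition-of-unity patch, assuming the common choice φ(r) = r³ augmented with a linear polynomial for well-posedness; the test function and point set are illustrative, not from the paper.

```python
import numpy as np

def phs_fit(X, y):
    """Fit a polyharmonic spline phi(r) = r^3 with linear polynomial tail."""
    n, d = X.shape
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = r ** 3
    P = np.hstack([np.ones((n, 1)), X])               # 1, x, y, ...
    M = np.block([[A, P], [P.T, np.zeros((d + 1, d + 1))]])
    rhs = np.concatenate([y, np.zeros(d + 1)])
    coef = np.linalg.solve(M, rhs)
    return coef[:n], coef[n:]                         # RBF weights, poly coeffs

def phs_eval(X, w, c, Xq):
    r = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=-1)
    return r ** 3 @ w + c[0] + Xq @ c[1:]

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 2))                  # sample points
f = lambda X: np.exp(-np.sum(X ** 2, axis=1))         # toy "surface"
w, c = phs_fit(X, f(X))
Xq = rng.uniform(-1, 1, size=(200, 2))
err = np.max(np.abs(phs_eval(X, w, c, Xq) - f(Xq)))   # global error proxy
```

    In an adaptive version, an error estimate of this kind would be evaluated locally per patch and used to decide where to insert new sample points.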

  14. Adaptive nonseparable vector lifting scheme for digital holographic data compression.

    PubMed

    Xing, Yafei; Kaaniche, Mounir; Pesquet-Popescu, Béatrice; Dufaux, Frédéric

    2015-01-01

    Holographic data play a crucial role in recent three-dimensional imaging as well as microscopic applications. As a result, huge amounts of storage capacity will be involved for this kind of data. Therefore, it becomes necessary to develop efficient hologram compression schemes for storage and transmission purposes. In this paper, we focus on the shifted distance information, obtained by the phase-shifting algorithm, where two sets of difference data need to be encoded. More precisely, a nonseparable vector lifting scheme is investigated in order to exploit the two-dimensional characteristics of the holographic contents. Simulations performed on different digital holograms have shown the effectiveness of the proposed method in terms of bitrate saving and quality of object reconstruction. PMID:25967029

  15. An adaptive additive inflation scheme for Ensemble Kalman Filters

    NASA Astrophysics Data System (ADS)

    Sommer, Matthias; Janjic, Tijana

    2016-04-01

    Data assimilation for atmospheric dynamics requires an accurate estimate of the uncertainty of the forecast in order to obtain an optimal combination with available observations. This uncertainty has two components: firstly, the uncertainty which originates in the initial condition of the forecast itself, and secondly, the error of the numerical model used. While the former can be approximated quite successfully with an ensemble of forecasts (an additional sampling error will occur), little is known about the latter. For ensemble data assimilation, ad-hoc methods to address model error include multiplicative and additive inflation schemes, possibly also flow-dependent. The additive schemes rely on samples for the model error, e.g. from short-term forecast tendencies or differences of forecasts with varying resolutions. However, since these methods work in ensemble space (i.e. act directly on the ensemble perturbations), the sampling error is fixed and can be expected to affect the skill substantially. In this contribution we show how inflation can be generalized to take into account more degrees of freedom and what improvements for future operational ensemble data assimilation can be expected from this, also in comparison with other inflation schemes.
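    The two ad-hoc inflation treatments mentioned above can be sketched as follows. Everything here is synthetic (random ensemble, synthetic model-error samples) and purely illustrative of the mechanics, not of the generalized scheme the abstract proposes.

```python
import numpy as np

rng = np.random.default_rng(1)
n_state, n_ens = 10, 20
ens = rng.normal(size=(n_state, n_ens))            # forecast ensemble

def inflate_multiplicative(ens, rho):
    """Scale the perturbations about the ensemble mean by a factor rho."""
    mean = ens.mean(axis=1, keepdims=True)
    return mean + rho * (ens - mean)

def inflate_additive(ens, model_error_samples):
    """Add randomly drawn model-error samples to each member."""
    idx = rng.integers(model_error_samples.shape[1], size=ens.shape[1])
    return ens + model_error_samples[:, idx]

Q = 0.3 * rng.normal(size=(n_state, 50))           # synthetic error samples
ens_mult = inflate_multiplicative(ens, rho=1.1)
ens_add = inflate_additive(ens, Q)
```

    Multiplicative inflation leaves the ensemble mean unchanged and scales the spread exactly by ρ; additive inflation injects structure from the model-error samples, which is why the quality of those samples matters so much.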

  16. Simulating stochastic dynamics using large time steps.

    PubMed

    Corradini, O; Faccioli, P; Orland, H

    2009-12-01

    We present an approach to investigate the long-time stochastic dynamics of multidimensional classical systems, in contact with a heat bath. When the potential energy landscape is rugged, the kinetics displays a decoupling of short- and long-time scales and both molecular dynamics or Monte Carlo (MC) simulations are generally inefficient. Using a field theoretic approach, we perform analytically the average over the short-time stochastic fluctuations. This way, we obtain an effective theory, which generates the same long-time dynamics of the original theory, but has a lower time-resolution power. Such an approach is used to develop an improved version of the MC algorithm, which is particularly suitable to investigate the dynamics of rare conformational transitions. In the specific case of molecular systems at room temperature, we show that elementary integration time steps used to simulate the effective theory can be chosen a factor approximately 100 larger than those used in the original theory. Our results are illustrated and tested on a simple system, characterized by a rugged energy landscape. PMID:20365123

  17. A Quasi-Conservative Adaptive Semi-Lagrangian Advection-Diffusion Scheme

    NASA Astrophysics Data System (ADS)

    Behrens, Joern

    2014-05-01

    Many processes in atmospheric or oceanic tracer transport are conveniently represented by advection-diffusion type equations. Depending on the magnitudes of both components, the mathematical representation and consequently the discretization is a non-trivial problem. We will focus on advection-dominated situations and will introduce a semi-Lagrangian scheme with adaptive mesh refinement for high local resolution. This scheme is well suited for pollutant transport from point sources, or transport processes featuring fine filamentation with corresponding local concentration maxima. In order to achieve stability, accuracy and conservation, we combine an adaptive mesh refinement quasi-conservative semi-Lagrangian scheme, based on an integral formulation of the underlying advective conservation law (Behrens, 2006), with an advection diffusion scheme as described by Spiegelman and Katz (2006). The resulting scheme proves to be conservative and stable, while maintaining high computational efficiency and accuracy.
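    The semi-Lagrangian building block can be sketched in one dimension: trace each grid point back along the (here constant) velocity and interpolate the departure value. This sketch is illustrative only; unlike the scheme of the abstract it is neither adaptive nor quasi-conservative.

```python
import numpy as np

def semi_lagrangian_step(q, u, dt, dx):
    """One semi-Lagrangian advection step on a periodic 1-D grid,
    with linear interpolation at the departure points."""
    n = q.size
    x = np.arange(n) * dx
    x_dep = (x - u * dt) % (n * dx)          # departure points (periodic)
    j = np.floor(x_dep / dx).astype(int)
    frac = x_dep / dx - j
    return (1 - frac) * q[j] + frac * q[(j + 1) % n]

n, dx, u, dt = 200, 1.0 / 200, 1.0, 0.01     # Courant number u*dt/dx = 2
x = np.arange(n) * dx
q0 = np.exp(-200 * (x - 0.5) ** 2)           # Gaussian pulse
q = q0.copy()
for _ in range(100):                         # advect once around the domain
    q = semi_lagrangian_step(q, u, dt, dx)
```

    Note the Courant number of 2: the semi-Lagrangian step remains stable beyond the explicit Eulerian CFL limit, which is the property that makes the approach attractive for advection-dominated transport.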

  18. Modeling scramjet combustor flowfields with a grid adaptation scheme

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, R.; Singh, D. J.

    1994-01-01

    The accurate description of flow features associated with the normal injection of fuel into supersonic primary flows is essential in the design of efficient engines for hypervelocity aerospace vehicles. The flow features in such injections are complex, with multiple interactions between shocks and between shocks and boundary layers. Numerical studies of perpendicular sonic N2 injection and mixing in a Mach 3.8 scramjet combustor environment are discussed. A dynamic grid adaptation procedure based on the equilibration of a spring-mass system is employed to enhance the description of the complicated flow features. Numerical results are compared with experimental measurements and indicate that the adaptation procedure enhances the capability of the modeling procedure to describe the flow features associated with scramjet combustor components.

  19. Automatic multirate methods for ordinary differential equations. [Adaptive time steps

    SciTech Connect

    Gear, C.W.

    1980-01-01

    A study is made of the application of integration methods in which different step sizes are used for different members of a system of equations. Such methods can result in savings if the cost of derivative evaluation is high or if a system is sparse; however, the estimation and control of errors is very difficult and can lead to high overheads. Three approaches are discussed, and it is shown that the least intuitive is the most promising. 2 figures.

  20. Welding Adaptive Functions Performed Through Infrared (IR) Simplified Vision Schemes

    NASA Astrophysics Data System (ADS)

    Begin, Ghislain; Boillot, Jean-Paul

    1984-02-01

    An ideal integrated robotic welding system should incorporate off-line programming with the possibility of real-time modifications of a given welding programme. Off-line programming makes possible the optimization of the various sequences of a programme by simulation and therefore promotes an increased welding-station duty cycle. Real-time modifications of a given programme, generated either by an off-line programming scheme or by a learn mode on the first piece of a series, are essential because on many occasions the cumulative dimensional tolerances and the distortions associated with the process build up a misfit between the programmed welding path and the real joint to be welded, to the extent that welding defects occur.

  1. Adaptive regularized scheme for remote sensing image fusion

    NASA Astrophysics Data System (ADS)

    Tang, Sizhang; Shen, Chaomin; Zhang, Guixu

    2016-06-01

    We propose an adaptive regularized algorithm for remote sensing image fusion based on variational methods. In the algorithm, we integrate the inputs using a "grey world" assumption to achieve visual uniformity. We propose a fusion operator that can automatically select the total variation (TV)-L1 term for edges and L2-terms for non-edges. To implement our algorithm, we use the steepest descent method to solve the corresponding Euler-Lagrange equation. Experimental results show that the proposed algorithm achieves remarkable results.

  2. Design of adaptive steganographic schemes for digital images

    NASA Astrophysics Data System (ADS)

    Filler, Tomás; Fridrich, Jessica

    2011-02-01

    Most steganographic schemes for real digital media embed messages by minimizing a suitably defined distortion function. In practice, this is often realized by syndrome codes which offer near-optimal rate-distortion performance. However, the distortion functions are designed heuristically and the resulting steganographic algorithms are thus suboptimal. In this paper, we present a practical framework for optimizing the parameters of additive distortion functions to minimize statistical detectability. We apply the framework to digital images in both spatial and DCT domain by first defining a rich parametric model which assigns a cost of making a change at every cover element based on its neighborhood. Then, we present a practical method for optimizing the parameters with respect to a chosen detection metric and feature space. We show that the size of the margin between support vectors in soft-margin SVMs leads to a fast detection metric and that methods minimizing the margin tend to be more secure w.r.t. blind steganalysis. The parameters obtained by the Nelder-Mead simplex-reflection algorithm for spatial and DCT-domain images are presented and the new embedding methods are tested by blind steganalyzers utilizing various feature sets. Experimental results show that as few as 80 images are sufficient for obtaining good candidates for parameters of the cost model, which allows us to speed up the parameter search.

  3. Adaptive Covariance Inflation in a Multi-Resolution Assimilation Scheme

    NASA Astrophysics Data System (ADS)

    Hickmann, K. S.; Godinez, H. C.

    2015-12-01

    When forecasts are performed using modern data assimilation methods, observation and model error can be scale-dependent. During data assimilation the blending of error across scales can result in model divergence, since large errors at one scale can be propagated across scales during the analysis step. Wavelet-based multi-resolution analysis can be used to separate scales in model and observations during the application of an ensemble Kalman filter. However, this separation is done at the cost of implementing an ensemble Kalman filter at each scale. This presents problems when tuning the covariance inflation parameter at each scale. We present a method to adaptively tune a scale-dependent covariance inflation vector based on balancing the covariance of the innovation and the covariance of observations of the ensemble. Our methods are demonstrated on a one-dimensional Kuramoto-Sivashinsky (K-S) model known to demonstrate non-linear interactions between scales.

  4. A High Fuel Consumption Efficiency Management Scheme for PHEVs Using an Adaptive Genetic Algorithm

    PubMed Central

    Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah

    2015-01-01

    A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve fuel consumption reduction, an adaptive genetic algorithm scheme has been designed to adaptively manage the energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and resembles the driving conditions and environment of PHEVs, thus trading off between petrol versus electricity for optimal driving efficiency. Comparison between calculated results and publicized data shows that the achieved efficiency of the fuzzified genetic algorithm is better by 10% than existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day. PMID:25587974

  6. Sensitivity of a thermodynamic sea ice model with leads to time step size

    NASA Technical Reports Server (NTRS)

    Ledley, T. S.

    1985-01-01

    The characteristics of sea ice models, developed to study the physics of the growth and melt of ice at the ocean surface and the variations in ice extent, depend on the size of the time step. Thus, to study longer-term variations within a reasonable computer budget, a model with a scheme allowing longer time steps has been constructed. However, the results produced by the model can definitely depend on the length of the time step. The sensitivity of a model to time-step size can be reduced by appropriate approaches. The present investigation is concerned with experiments which use a formulation of a lead parameterization that can be considered as a first step toward the development of a lead parameterization suitable for a use in long-term climate studies.

  7. Collocation and Galerkin Time-Stepping Methods

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2011-01-01

    We study the numerical solutions of ordinary differential equations by one-step methods where the solution at tn is known and that at t(sub n+1) is to be calculated. The approaches employed are collocation, continuous Galerkin (CG) and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t(sub n+1)), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
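    The equivalence established above casts these schemes as s-stage implicit Runge-Kutta methods. The sketch below assumes the standard 2-stage Radau IIA tableau (not taken from the paper) and checks its expected third-order convergence on the linear test problem y' = λy.

```python
import numpy as np

# Standard 2-stage Radau IIA (right-Radau collocation) Butcher tableau.
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])

def radau_iia_step(y, h, lam):
    # For y' = lam*y the stage equations K = lam*(y + h*A@K) are linear,
    # so they can be solved directly instead of by Newton iteration.
    K = np.linalg.solve(np.eye(2) - h * lam * A, lam * y * np.ones(2))
    return y + h * (b @ K)

def integrate(h, lam=-1.0, T=1.0):
    y = 1.0
    for _ in range(int(round(T / h))):
        y = radau_iia_step(y, h, lam)
    return y

err1 = abs(integrate(0.1) - np.exp(-1.0))
err2 = abs(integrate(0.05) - np.exp(-1.0))
order = np.log2(err1 / err2)   # observed convergence order
```

    Halving the step size reduces the error by roughly a factor of eight, i.e. the observed order is close to 3, as expected for 2-stage Radau IIA.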

  8. A new time-stepping method for regional climate models

    NASA Astrophysics Data System (ADS)

    Williams, P. D.

    2010-12-01

    The dynamical cores of many regional climate models use the Robert-Asselin filter to suppress the spurious computational mode of the leapfrog scheme. Unfortunately, whilst successfully eliminating the unwanted mode, the Robert-Asselin filter also weakly suppresses the physical solution and degrades the numerical accuracy. These two concomitant problems occur because the filter does not conserve the mean state, averaged over the three time slices on which it operates. This presentation proposes a simple modification to the Robert-Asselin filter, which does conserve the three-time-level mean state. When used in conjunction with the leapfrog scheme, the modification vastly reduces the artificial damping of the physical solution. Correspondingly, the modification increases the numerical accuracy for amplitude errors by two orders, yielding third-order accuracy. The modified filter may easily be incorporated into existing regional climate models, via the addition of only a few lines of code that are computationally very inexpensive. Results will be shown from recent implementations of the modified filter in various models. The modification will be shown to reduce model biases and to significantly improve the predictive skill. [Figure: magnitude of the complex amplification factor as a function of the non-dimensional time step, for leapfrog integrations; this quantity would be identical to 1 for a perfect numerical scheme. The filter proposed here (case α=0.53) has much smaller numerical errors than the original Robert-Asselin filter (case α=1), is trivial to implement, and is no more computationally expensive. Taken from Williams (2009, Monthly Weather Review).]

  9. Acceleration of the chemistry solver for modeling DI engine combustion using dynamic adaptive chemistry (DAC) schemes

    NASA Astrophysics Data System (ADS)

    Shi, Yu; Liang, Long; Ge, Hai-Wen; Reitz, Rolf D.

    2010-03-01

    Acceleration of the chemistry solver for engine combustion is of much interest due to the fact that in practical engine simulations extensive computational time is spent solving the fuel oxidation and emission formation chemistry. A dynamic adaptive chemistry (DAC) scheme based on a directed relation graph error propagation (DRGEP) method has previously been applied to study homogeneous charge compression ignition (HCCI) engine combustion with detailed chemistry (over 500 species) using an R-value-based breadth-first search (RBFS) algorithm, which significantly reduced computational times (by as much as 30-fold). The present paper extends the use of this on-the-fly kinetic mechanism reduction scheme to model combustion in direct-injection (DI) engines. It was found that the DAC scheme becomes less efficient when applied to DI engine simulations using a kinetic mechanism of relatively small size, and that the accuracy of the original DAC scheme decreases for conventional non-premixed combustion engines. The present study also focuses on the determination of search-initiating species, the involvement of the NOx chemistry, the selection of a proper error tolerance, and the treatment of the interaction of chemical heat release and the fuel spray. Both DAC schemes were integrated into the ERC KIVA-3v2 code, and simulations were conducted to compare the two schemes. In general, the present DAC scheme has better efficiency and similar accuracy compared to the previous DAC scheme. The efficiency depends on the size of the chemical kinetics mechanism used and the engine operating conditions. For cases using a small n-heptane kinetic mechanism of 34 species, 30% of the computational time is saved, and 50% for a larger n-heptane kinetic mechanism of 61 species. The paper also demonstrates that by combining the present DAC scheme with an adaptive multi-grid chemistry (AMC) solver, it is feasible to simulate a direct-injection engine using a detailed n-heptane mechanism with 543 species.
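    The DRGEP core, a graph search that propagates interaction coefficients outward from the search-initiating species and keeps species whose overall R-value exceeds the error tolerance, can be sketched as follows. The species graph and weights below are toy values, not a real mechanism.

```python
def drgep_select(graph, start_species, tol):
    """DRGEP-style selection: R-value of a species = max over paths from a
    search-initiating species of the product of edge weights; keep species
    with R >= tol."""
    R = {s: 1.0 for s in start_species}
    stack = list(start_species)
    while stack:
        s = stack.pop()
        for t, w in graph.get(s, {}).items():
            r = R[s] * w                      # path value: product of weights
            if r > R.get(t, 0.0):             # found a stronger path to t
                R[t] = r
                stack.append(t)
    return {s for s, r in R.items() if r >= tol}

# Toy interaction graph rooted at the fuel species (weights are made up).
graph = {
    "NC7H16": {"C7H15": 0.9, "OH": 0.6},
    "C7H15": {"C2H4": 0.8, "HO2": 0.3},
    "C2H4": {"CO": 0.5},
    "OH": {"H2O": 0.7},
}
kept = drgep_select(graph, ["NC7H16"], tol=0.3)
```

    Raising the tolerance prunes more species (and their reactions) from the active mechanism at each cell and time step, which is the source of the speedups, and of the accuracy trade-off, discussed above.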

  10. A Self-Adaptive Behavior-Aware Recruitment Scheme for Participatory Sensing.

    PubMed

    Zeng, Yuanyuan; Li, Deshi

    2015-01-01

    Participatory sensing services utilizing the abundant social participants with sensor-enabled handheld smart device resources are gaining high interest nowadays. One of the challenges faced is the recruitment of participants by fully utilizing their daily activity behavior with self-adaptiveness toward the realistic application scenarios. In the paper, we propose a self-adaptive behavior-aware recruitment scheme for participatory sensing. People are assumed to join the sensing tasks along with their daily activity without pre-defined ground truth or any instructions. The scheme is proposed to model the tempo-spatial behavior and data quality rating to select participants for participatory sensing campaign. Based on this, the recruitment is formulated as a linear programming problem by considering tempo-spatial coverage, data quality, and budget. The scheme enables one to check and adjust the recruitment strategy adaptively according to application scenarios. The evaluations show that our scheme provides efficient sensing performance as stability, low-cost, tempo-spatial correlation and self-adaptiveness. PMID:26389910

  12. Finite volume scheme with quadratic reconstruction on unstructured adaptive meshes applied to turbomachinery flows

    SciTech Connect

    Delanaye, M.; Essers, J.A.

    1997-04-01

    This paper presents a new finite volume cell-centered scheme for solving the two-dimensional Euler equations. The technique for computing the advective derivatives is based on a high-order Gauss quadrature and an original quadratic reconstruction of the conservative variables for each control volume. A very sensitive detector identifying discontinuity regions switches the scheme to a TVD scheme, and ensures the monotonicity of the solution. The code uses unstructured meshes whose cells are polygons with any number of edges. A mesh adaptation based on cell division is performed in order to increase the resolution of shocks. The accuracy, insensitivity to grid distortions, and shock capturing properties of the scheme are demonstrated for different cascade flow computations.

  13. Adaptive QoS Class Allocation Schemes in Multi-Domain Path-Based Networks

    NASA Astrophysics Data System (ADS)

    Ogino, Nagao; Nakamura, Hajime

    MPLS-based path technology shows promise as a means of realizing reliable IP networks. Real-time services such as VoIP and video-conferencing supplied through a multi-domain MPLS network must be able to guarantee the end-to-end QoS of the inter-domain paths. Thus, it is important to allocate an appropriate QoS class to the inter-domain paths in each domain they traverse. Because each domain has its own policy for QoS class allocation, it is necessary to adaptively allocate the optimum QoS class based on estimation of the QoS class allocation policies in other domains. This paper proposes two kinds of adaptive QoS class allocation schemes, assuming that the arriving inter-domain path requests include the number of downstream domains traversed by the inter-domain paths and the remaining QoS value toward the destination nodes. First, a measurement-based scheme, based on measurement of the loss rates of inter-domain paths in the downstream domains, is proposed. This scheme estimates the QoS class allocation policies in the downstream domains using the measured loss rates of path requests. Second, a state-dependent scheme, based on measurement of the arrival rates of path requests in addition to the loss rates of paths in the downstream domains, is also proposed. This scheme allows an appropriate QoS class to be allocated according to the domain state. This paper proposes an application of Markov decision theory to the modeling of the state-dependent scheme. The performances of the proposed schemes are evaluated and compared with those of less complicated non-adaptive schemes using a computer simulation. The results of the comparison reveal that the proposed schemes can adaptively increase the number of inter-domain paths accommodated in the considered domain, even when the QoS class allocation policies change in the other domains and the arrival pattern of path requests varies in the considered domain.

  14. Kinematic dynamos using constrained transport with high order Godunov schemes and adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Teyssier, Romain; Fromang, Sébastien; Dormy, Emmanuel

    2006-10-01

    We propose to extend the well-known MUSCL-Hancock scheme for Euler equations to the induction equation modeling the magnetic field evolution in kinematic dynamo problems. The scheme is based on an integral form of the underlying conservation law which, in our formulation, results in a “finite-surface” scheme for the induction equation. This naturally leads to the well-known “constrained transport” method, with additional continuity requirement on the magnetic field representation. The second ingredient in the MUSCL scheme is the predictor step that ensures second order accuracy both in space and time. We explore specific constraints that the mathematical properties of the induction equations place on this predictor step, showing that three possible variants can be considered. We show that the most aggressive formulations (referred to as C-MUSCL and U-MUSCL) reach the same level of accuracy as the other one (referred to as Runge Kutta), at a lower computational cost. More interestingly, these two schemes are compatible with the adaptive mesh refinement (AMR) framework. It has been implemented in the AMR code RAMSES. It offers a novel and efficient implementation of a second order scheme for the induction equation. We have tested it by solving two kinematic dynamo problems in the low diffusion limit. The construction of this scheme for the induction equation constitutes a step towards solving the full MHD set of equations using an extension of our current methodology.

  15. Multi-dimensional upwind fluctuation splitting scheme with mesh adaption for hypersonic viscous flow

    NASA Astrophysics Data System (ADS)

    Wood, William Alfred, III

    production is shown relative to DMFDSFV. Remarkably the fluctuation splitting scheme shows grid converged skin friction coefficients with only five points in the boundary layer for this case. A viscous Mach 17.6 (perfect gas) cylinder case demonstrates solution monotonicity and heat transfer capability with the fluctuation splitting scheme. While fluctuation splitting is recommended over DMFDSFV, the difference in performance between the schemes is not so great as to obsolete DMFDSFV. The second half of the dissertation develops a local, compact, anisotropic unstructured mesh adaption scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. This alignment behavior stands in contrast to the curvature clustering nature of the local, anisotropic unstructured adaption strategy based upon a posteriori error estimation that is used for comparison. The characteristic alignment is most pronounced for linear advection, with reduced improvement seen for the more complex non-linear advection and advection-diffusion cases. The adaption strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization. The system test case for the adaption strategy is a sting mounted capsule at Mach-10 wind tunnel conditions, considered in both two-dimensional and axisymmetric configurations. For this complex flowfield the adaption results are disappointing since feature alignment does not emerge from the local operations. Aggressive adaption is shown to result in a loss of robustness for the solver, particularly in the bow shock/stagnation point interaction region. Reducing the adaption strength maintains solution robustness but fails to produce significant improvement in the surface heat transfer predictions.

  16. Consistency of internal fluxes in a hydrological model running at multiple time steps

    NASA Astrophysics Data System (ADS)

    Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-04-01

Improving hydrological models remains a difficult task, and many avenues can be explored, including improved spatial representation, more robust parametrizations, better formulation of individual processes, or modification of model structures by trial and error. Several past works indicate that model parameters and structure can depend on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps to find solutions for improvement. Here we analyse the impact of the data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, using a large data set of 240 catchments. To this end, fine-time-step hydro-climatic information at sub-hourly resolution is used as input to a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance for identifying the model components that should be improved. Our analysis indicates that the baseline model structure must be modified at sub-daily time steps to warrant the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception component, whose output flux showed the strongest sensitivity to the modelling time step. The dependency of the optimal model
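
    The time-step sensitivity of the interception flux can be reproduced with a toy interception store (a hedged sketch, not the GR4J formulation): feeding the same forcing at a coarser step changes the simulated flux, because the store can only fill and spill once per step.

    ```python
    def interception(rain, pet, capacity):
        """Toy interception store: rain fills the store up to `capacity` (the rest
        is throughfall), then evaporation removes min(store, PET). Illustrative
        only, not the actual GR model formulation."""
        store, evap = 0.0, 0.0
        for p, e in zip(rain, pet):
            store = min(capacity, store + p)    # excess becomes throughfall
            loss = min(store, e)
            evap += loss
            store -= loss
        return evap

    def aggregate(series, k):
        """Sum a fine-step series into coarse blocks of k steps."""
        return [sum(series[i:i + k]) for i in range(0, len(series), k)]

    rain, pet = [1.0] * 96, [2.0] * 96          # steady drizzle, high demand
    fine = interception(rain, pet, 2.0)         # store refills every fine step
    coarse = interception(aggregate(rain, 24), aggregate(pet, 24), 2.0)
    print(fine > coarse)  # True: coarser steps lose most of the flux to overflow
    ```

    With identical total forcing, the fine-step run intercepts nearly all the drizzle while the coarse-step run spills most of it, which is exactly the kind of flux inconsistency the study diagnoses.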

  17. An Adaptive Handover Prediction Scheme for Seamless Mobility Based Wireless Networks

    PubMed Central

    Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime

    2014-01-01

We propose an adaptive handover prediction (AHP) scheme for seamless-mobility-based wireless networks. The AHP scheme incorporates fuzzy logic into the AP prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, the mobile node's relative direction towards the access points in the vicinity, and access point load, are collected and used as inputs to the fuzzy decision-making system in order to select the most preferable AP among the surrounding WLANs. The handover decision, based on a quality cost calculated by the fuzzy inference system, relies on adaptable rather than fixed coefficients. In other words, the mean and standard deviation of the normalized network prediction metrics of the fuzzy inference system, collected from the available WLANs, are obtained adaptively and applied as statistical information to adjust the coefficients of the membership functions. In addition, we propose an adjustable weight vector concept for the input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed in each MN independently after obtaining the RSS, direction toward APs, and AP load. Finally, performance evaluation of the proposed scheme shows its superiority compared with representative prediction approaches. PMID:25574490
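
    The adaptive-coefficient idea (normalizing each metric with the mean and standard deviation observed across the current candidate APs, then combining with adjustable weights) can be sketched as follows. This is a hedged illustration: the metric names, weights, and the simple weighted sum stand in for the paper's fuzzy inference system.

    ```python
    import statistics

    def quality_cost(metrics, weights, higher_is_better):
        """Standardize each metric with the mean and standard deviation of the
        current measurements (the adaptive coefficients), then combine the
        z-scores with adjustable weights into one quality cost per AP."""
        n_aps = len(next(iter(metrics.values())))
        costs = [0.0] * n_aps
        for name, values in metrics.items():
            mu = statistics.mean(values)
            sigma = statistics.pstdev(values) or 1.0   # guard a flat metric
            for i, v in enumerate(values):
                z = (v - mu) / sigma
                costs[i] += weights[name] * (z if higher_is_better[name] else -z)
        return costs

    aps = {"rss": [-60.0, -75.0, -80.0],       # dBm, higher is better
           "load": [0.7, 0.3, 0.9],            # utilization, lower is better
           "direction": [0.9, 0.5, 0.2]}       # alignment score, higher is better
    w = {"rss": 0.5, "load": 0.3, "direction": 0.2}
    better = {"rss": True, "load": False, "direction": True}
    costs = quality_cost(aps, w, better)
    best = costs.index(max(costs))             # AP 0 wins on signal and direction
    ```

    Because the normalization is recomputed from whatever APs are currently visible, the scoring adapts to the local radio environment rather than relying on fixed thresholds.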

  18. Efficient, adaptive energy stable schemes for the incompressible Cahn-Hilliard Navier-Stokes phase-field models

    NASA Astrophysics Data System (ADS)

    Chen, Ying; Shen, Jie

    2016-03-01

In this paper we develop a fully adaptive, energy stable scheme for the Cahn-Hilliard Navier-Stokes system, a phase-field model for two-phase incompressible flows consisting of a Cahn-Hilliard-type diffusion equation and a Navier-Stokes equation. The scheme, which is decoupled and unconditionally energy stable based on stabilization, combines adaptive mesh refinement, adaptive time stepping and a nonlinear multigrid finite difference method. Numerical experiments are carried out to validate the scheme for problems with matched and non-matched densities, and also demonstrate that CPU time can be significantly reduced with our adaptive approach.

  19. Parallel Implementation of an Adaptive Scheme for 3D Unstructured Grids on the SP2

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Strawn, Roger C.

    1996-01-01

Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10% of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.

  1. LPTA: Location Predictive and Time Adaptive Data Gathering Scheme with Mobile Sink for Wireless Sensor Networks

    PubMed Central

    Rodrigues, Joel J. P. C.

    2014-01-01

This paper exploits sink mobility to prolong the lifetime of sensor networks while keeping the data transmission delay relatively low. A location predictive and time adaptive data gathering scheme is proposed. We introduce a sink location prediction principle based on loose time synchronization and deduce the time-location formulas of the mobile sink. Using their local clocks and these formulas, nodes in the network are able to calculate the current location of the mobile sink accurately and route data packets in a timely manner toward the mobile sink by multihop relay. Considering that the data packets generated in different areas may differ greatly, an adaptive dwelling time adjustment method is also proposed to balance energy consumption among nodes in the network. Simulation results show that our data gathering scheme enables data routing with lower transmission delay and balances energy consumption among nodes. PMID:25302327
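
    A time-location formula can be made concrete under a simple assumption. The circular trajectory, radius, and speed below are illustrative choices, not the paper's actual sink path: the point is only that a node with a loosely synchronized clock can compute the sink's position from time alone.

    ```python
    import math

    def sink_position(t, radius=100.0, speed=5.0, t0=0.0):
        """Time-location formula sketch: the mobile sink travels a circular path
        of known radius at constant speed, so any node can predict its current
        coordinates from the (loosely synchronized) time t."""
        theta = speed * (t - t0) / radius       # arc length travelled / radius
        return radius * math.cos(theta), radius * math.sin(theta)

    # a node routes toward the predicted location instead of flooding a query
    x, y = sink_position(t=10.0)
    ```

    In the paper's scheme the same principle lets every node route packets toward where the sink will be, which is what removes the need for sink-location advertisement traffic.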

  2. High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs

    NASA Technical Reports Server (NTRS)

    Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.

    2014-01-01

This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) Runge-Kutta (RK) time-stepping schemes, which increases numerical stability with respect to the time step size, resulting in decreased computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain a significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burgers' equation and the 2D advection equation, respectively. The method uses two different summation-by-parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in numerical stability 65 times greater than the fully explicit scheme is demonstrated with the OP IMEX method applied to the 1D Burgers' equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. Also, the domain partitioning method in this work shows potential for breaking the computational domain into manageable sizes such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solvers rather than the standard iterative methods currently used.
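
    The stiff/non-stiff splitting that IMEX schemes exploit can be seen in a one-line model problem. This is a hedged, first-order sketch of the additive IMEX idea; the paper's schemes are 6-stage, fourth-order additive RK with SBP-SAT spatial coupling.

    ```python
    import math

    def imex_euler_step(u, t, dt, stiff=-1000.0):
        """One first-order IMEX Euler step for u' = stiff*u + sin(t): the stiff
        linear term is advanced implicitly, the non-stiff forcing explicitly."""
        # implicit in u_new: u_new = u + dt*(stiff*u_new + sin(t))
        return (u + dt * math.sin(t)) / (1.0 - dt * stiff)

    # dt = 0.01 violates the explicit Euler limit |1 + dt*stiff| < 1 for this
    # problem, yet the IMEX update stays stable and tracks the slow response
    u, dt = 1.0, 0.01
    for k in range(1000):
        u = imex_euler_step(u, k * dt, dt)
    print(abs(u) < 0.01)  # True
    ```

    Treating only the stiff term implicitly keeps the per-step solve cheap (here a scalar division) while lifting the explicit stability restriction, which is the trade-off the OP IMEX scheme scales up to multi-block domains.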

  3. Time step and shadow Hamiltonian in molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Kim, Sangrak

    2015-08-01

We examine the time step and the shadow Hamiltonian of symplectic algorithms for a bound system, taking a simple harmonic oscillator as a specific example. The phase space trajectory moves on the hyperplane of a constant shadow Hamiltonian. We find a stationary condition for the time step τ_n with which the motion repeats itself on the phase space with a period n. Interestingly, the time steps satisfying the stationary condition turn out to be independent of the symplectic algorithm chosen. Furthermore, the phase volume enclosed by the phase trajectory is given by nτ_n Ẽ_n, where Ẽ_n is the initial shadow energy of the corresponding symplectic algorithm.
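
    The periodicity can be checked numerically for the unit harmonic oscillator. This is a hedged sketch assuming velocity Verlet (one of the symplectic algorithms covered by the abstract's general statement): one step rotates the phase-space point by an angle θ with cos θ = 1 − τ²/2, so τ_n = 2 sin(π/n) makes the discrete trajectory close exactly after n steps.

    ```python
    import math

    def verlet_step(x, v, tau):
        """One velocity-Verlet step for the unit harmonic oscillator (m = k = 1)."""
        v_half = v - 0.5 * tau * x
        x_new = x + tau * v_half
        return x_new, v_half - 0.5 * tau * x_new

    def periodic_time_step(n):
        """Time step tau_n that closes the discrete trajectory after n steps:
        cos(theta) = 1 - tau**2/2 and theta = 2*pi/n give tau_n = 2*sin(pi/n)."""
        return 2.0 * math.sin(math.pi / n)

    n = 12
    tau = periodic_time_step(n)
    x, v = 1.0, 0.0
    for _ in range(n):
        x, v = verlet_step(x, v, tau)
    print(abs(x - 1.0) < 1e-12 and abs(v) < 1e-12)  # True: the orbit closes
    ```

    Since the n-step map is exactly the identity in this case, the closure holds to machine precision rather than merely to the integrator's truncation order.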

  4. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    PubMed

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-01-01

With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. Specifically, we employ entangled states prepared with m-bonacci sequences to detect eavesdropping, and we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations and permits secret sharing for an arbitrary number of classical participants, no fewer than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works under dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908

  6. Novel calibration and color adaptation schemes in three-fringe RGB photoelasticity

    NASA Astrophysics Data System (ADS)

    Swain, Digendranath; Thomas, Binu P.; Philip, Jeby; Pillai, S. Annamala

    2015-03-01

Isochromatic demodulation in digital photoelasticity using RGB calibration is a two-step process. The first step involves the construction of a look-up table (LUT) from a calibration experiment. In the second step, isochromatic data are demodulated by matching the colors of an analysis image with the colors in the LUT. Because tint conditions in the actual test and the calibration experiment differ for various reasons, color adaptation techniques that modify an existing primary LUT are employed; the primary LUT, however, is still generated from bending experiments. In this paper, RGB demodulation based on a theoretically constructed LUT is attempted in order to exploit the advantages of color adaptation schemes, so that the experimental mode of LUT generation and some of its uncertainties can be minimized. Additionally, a new color adaptation algorithm is proposed using quadratic Lagrangian interpolation polynomials, which is numerically better than the two-point linear interpolations available in the literature. The new calibration and color adaptation schemes are validated and applied to demodulate fringe orders in live models and stress-frozen slices.

  7. On-line Adaptive and Intelligent Distance Relaying Scheme for Power Network

    NASA Astrophysics Data System (ADS)

    Dubey, Rahul; Samantaray, S. R.; Panigrahi, B. K.; Venkoparao, G. V.

    2015-10-01

The paper presents an on-line sequential extreme learning machine (OS-ELM) based fast and accurate adaptive distance relaying scheme (ADRS) for transmission line protection. The proposed method develops adaptive relay characteristics suited to changes in the physical conditions of the power system. The trained model can be efficiently updated on-line by partial training on new data, reducing the model updating time whenever a new special case occurs. The effectiveness of the proposed method is validated on a simulation platform for a test system with two-terminal parallel transmission lines with complex mutual coupling. The test results, considering wide variations in the operating conditions of the faulted power network, indicate that the proposed adaptive relay setting provides significant improvement in relay performance.

  8. Application of a solution adaptive grid scheme, SAGE, to complex three-dimensional flows

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1991-01-01

    A new three-dimensional (3D) adaptive grid code based on the algebraic, solution-adaptive scheme of Nakahashi and Deiwert is developed and applied to a variety of problems. The new computer code, SAGE, is an extension of the same-named two-dimensional (2D) solution-adaptive program that has already proven to be a powerful tool in computational fluid dynamics applications. The new code has been applied to a range of complex three-dimensional, supersonic and hypersonic flows. Examples discussed are a tandem-slot fuel injector, the hypersonic forebody of the Aeroassist Flight Experiment (AFE), the 3D base flow behind the AFE, the supersonic flow around a 3D swept ramp and a generic, hypersonic, 3D nozzle-plume flow. The associated adapted grids and the solution enhancements resulting from the grid adaption are presented for these cases. Three-dimensional adaption is more complex than its 2D counterpart, and the complexities unique to the 3D problems are discussed.

  9. Causal-Path Local Time-Stepping in the discontinuous Galerkin method for Maxwell's equations

    NASA Astrophysics Data System (ADS)

    Angulo, L. D.; Alvarez, J.; Teixeira, F. L.; Pantoja, M. F.; Garcia, S. G.

    2014-01-01

We introduce a novel local time-stepping technique for marching-in-time algorithms. The technique is denoted Causal-Path Local Time-Stepping (CPLTS), and it is applied to two time integration techniques: fourth-order low-storage explicit Runge-Kutta (LSERK4) and second-order Leap-Frog (LF2). The CPLTS method is applied to evolve Maxwell's curl equations using a Discontinuous Galerkin (DG) scheme for the spatial discretization. Numerical results for LF2 and LSERK4 are compared with analytical solutions and with Montseny's LF2 technique. The results show that the CPLTS technique improves the dispersive and dissipative properties of the LF2-LTS scheme.

  10. An adaptive error modeling scheme for the lossless compression of EEG signals.

    PubMed

    Sriraam, N; Eswaran, C

    2008-09-01

Lossless compression of EEG signals is of great importance for neurological diagnosis, as specialists consider exact reconstruction of the signal a primary requirement. This paper discusses a lossless compression scheme for EEG signals that involves a predictor and an adaptive error modeling technique. The prediction residues are arranged based on the error count through a histogram computation. Two optimal regions are identified in the histogram plot through a heuristic search such that the bit requirement for encoding the two regions is minimal. Further improvement in compression is achieved by removing the statistical redundancy present in the residue signal with a context-based bias cancellation scheme. Three neural network predictors, namely, single-layer perceptron, multilayer perceptron, and Elman network, and two linear predictors, namely, an autoregressive model and a finite impulse response filter, are considered. Experiments are conducted using EEG signals recorded under different physiological conditions, and the performance of the proposed methods is evaluated in terms of the compression ratio. It is shown that the proposed adaptive error modeling schemes yield better compression results than other known compression methods. PMID:18779073
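
    The predictor-plus-residue idea behind such schemes can be illustrated with a simple fixed predictor on a synthetic signal. This is a hedged sketch: the paper's predictors are trained neural and linear models, and the two-region histogram coder is not reproduced here; the point is only that the residue stream has lower entropy than the raw samples while remaining exactly invertible.

    ```python
    import numpy as np

    def entropy(x):
        """Empirical entropy (bits per sample) of an integer sequence."""
        _, counts = np.unique(x, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def residuals(signal):
        """Residues of the second-order predictor x_hat[n] = 2x[n-1] - x[n-2],
        an illustrative stand-in for the paper's trained predictors; the first
        two samples are kept verbatim so the signal is exactly recoverable."""
        r = signal.copy()
        r[2:] = signal[2:] - (2 * signal[1:-1] - signal[:-2])
        return r

    rng = np.random.default_rng(0)
    eeg = np.round(50 * np.sin(np.arange(2000) / 20)
                   + rng.normal(0, 2, 2000)).astype(int)
    res = residuals(eeg)
    print(entropy(res) < entropy(eeg))  # True: the residue stream codes cheaper
    ```

    An entropy coder applied to `res` therefore needs fewer bits per sample than one applied to `eeg`, and the decoder recovers the signal bit-exactly by running the predictor in reverse.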

  11. A Trust-Based Adaptive Probability Marking and Storage Traceback Scheme for WSNs.

    PubMed

    Liu, Anfeng; Liu, Xiao; Long, Jun

    2016-01-01

Security is a pivotal issue for wireless sensor networks (WSNs), which are emerging as a promising platform enabling a wide range of military, scientific, industrial and commercial applications. Traceback, a key cyber-forensics technology, can play an important role in tracing and locating a malicious source to guarantee cybersecurity. In this work a trust-based adaptive probability marking and storage (TAPMS) traceback scheme is proposed to enhance security for WSNs. In the TAPMS scheme, the marking probability is adaptively adjusted according to the security requirements of the network, which can substantially reduce the number of marking tuples and improve network lifetime. More importantly, high-trust nodes are selected to store marking tuples, which avoids the problem of marking information being lost. Experimental results show that the total number of marking tuples is reduced in the TAPMS scheme, improving network lifetime. At the same time, since the marking tuples are stored in high-trust nodes, storage reliability can be guaranteed, and the traceback time can be reduced by more than 80%. PMID:27043566

  13. Adaptive two-stage Karhunen-Loeve-transform scheme for spectral decorrelation in hyperspectral bandwidth compression

    NASA Astrophysics Data System (ADS)

    Saghri, John A.

    2010-05-01

A computationally efficient adaptive two-stage Karhunen-Loeve transform (KLT) scheme for spectral decorrelation in hyperspectral lossy bandwidth compression is presented. The component decorrelation of JPEG 2000 (extension 2) is replaced with an adaptive two-stage KLT scheme. The data are partitioned into small subsets. The spectral correlation within each partition is removed via a first-stage KLT. The interpartition spectral correlation is removed using a second-stage KLT applied to the resulting top few sets of equilevel principal component (PC) images. Since only a fraction of the equilevel first-stage PC images are used in the second stage, the KLT transformation matrices have smaller sizes, leading to further improvement in computational complexity and coding efficiency. The computation of the proposed approach is parametrically quantified. It is shown that reconstructed image quality, as measured via statistical and/or machine-based exploitation measures, is improved by using a smaller partition size in the first-stage KLT. A criterion based on the components of the eigenvectors of the cross-covariance matrix is established to select the first-stage PC images that are used in the second-stage KLT. The proposed scheme also reduces the overhead bits required to transmit the covariance information to the receiver in conjunction with the coding bitstream.
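
    The two-stage structure can be sketched in a few lines. This is a hedged illustration on a toy data cube: the partition size and level count are arbitrary, and the paper's eigenvector-based PC selection criterion and coding stages are simplified away.

    ```python
    import numpy as np

    def klt(x):
        """KLT of the rows of x (bands x pixels): project zero-meaned data onto
        the covariance eigenvectors, ordered by descending eigenvalue."""
        xm = x - x.mean(axis=1, keepdims=True)
        cov = xm @ xm.T / x.shape[1]
        w, v = np.linalg.eigh(cov)                  # eigenvalues ascending
        return v[:, np.argsort(w)[::-1]].T @ xm     # PC images, strongest first

    def two_stage_klt(cube, partition=4, levels=2):
        """First-stage KLT within each spectral partition, then a second-stage
        KLT across partitions for each of the top `levels` equilevel PC images."""
        bands = cube.shape[0]
        stage1 = [klt(cube[i:i + partition]) for i in range(0, bands, partition)]
        stage2 = [klt(np.stack([p[j] for p in stage1])) for j in range(levels)]
        return stage1, stage2

    rng = np.random.default_rng(0)
    cube = rng.normal(size=(16, 100)).cumsum(axis=0)   # spectrally correlated toy cube
    s1, s2 = two_stage_klt(cube)                       # 4 partitions, 2 equilevel KLTs
    ```

    Because each KLT here works on a 4x4 covariance instead of the full 16x16 one, the eigendecompositions stay small, which is the computational saving the two-stage design targets.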

  14. An Adaptive Loss-Aware Flow Control Scheme for Delay-Sensitive Applications in OBS Networks

    NASA Astrophysics Data System (ADS)

    Jeong, Hongkyu; Choi, Jungyul; Mo, Jeonghoon; Kang, Minho

Optical Burst Switching (OBS) is one of the most promising switching technologies for next-generation optical networks. As delay-sensitive applications such as Voice-over-IP (VoIP) have become popular, OBS networks should guarantee stringent Quality of Service (QoS) requirements for such applications. Thus, this paper proposes an Adaptive Loss-aware Flow Control (ALFC) scheme, which adaptively decides on the burst offset time based on loss-rate information delivered from core nodes, assigning a high priority to delay-sensitive application traffic. The proposed ALFC scheme also controls the upper bounds of the factors inducing delay and jitter in order to guarantee the delay and jitter requirements of delay-sensitive application traffic. Moreover, a piggybacking method used in the proposed scheme accelerates the guarantee of the loss, delay, and jitter requirements, because the response time for flow control can be reduced to as little as a quarter of the Round Trip Time (RTT) on average while minimizing the signaling overhead. Simulation results show that our mechanism can guarantee a 10^-3 loss rate under any traffic load while offering satisfactory levels of delay and jitter for delay-sensitive applications.

  15. A high order Godunov scheme with constrained transport and adaptive mesh refinement for astrophysical magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Fromang, S.; Hennebelle, P.; Teyssier, R.

    2006-10-01

Aims: In this paper, we present a new method to perform numerical simulations of astrophysical MHD flows using the Adaptive Mesh Refinement framework and Constrained Transport. Methods: The algorithm is based on previous work in which the MUSCL-Hancock scheme was used to evolve the induction equation. In this paper, we detail the extension of this scheme to the full MHD equations and discuss its properties. Results: Through a series of test problems, we illustrate the performance of this new code using two different MHD Riemann solvers (Lax-Friedrich and Roe) and the need for Adaptive Mesh Refinement capabilities in some cases. Finally, we show its versatility by applying it to two completely different astrophysical situations well studied in past years: the growth of the magnetorotational instability in the shearing box and the collapse of magnetized cloud cores. Conclusions: We have implemented a new Godunov scheme to solve the ideal MHD equations in the AMR code RAMSES. We have shown that it results in a powerful tool that can be applied to a great variety of astrophysical problems, ranging from galaxy formation in the early universe to high-resolution studies of molecular cloud collapse in our galaxy.

  16. Nonorthogonal CSK/CDMA with Received-Power Adaptive Access Control Scheme

    NASA Astrophysics Data System (ADS)

    Komuro, Nobuyoshi; Habuchi, Hiromasa; Tsuboi, Toshinori

Mitigating Multiple Access Interference (MAI) and improving the data rate are key issues in advanced wireless networks. In this paper, a nonorthogonal Code Shift Keying Code Division Multiple Access (CSK/CDMA) scheme with received-power adaptive access control is proposed. In our system, a user who is ready to send first measures the power received from other users, and then decides whether to transmit or to refrain from transmission according to the received power and a pre-decided threshold. Besides mitigating the MAI problem, our system also improves the throughput performance. The throughput performance of the proposed system is evaluated by theoretical analysis. The nonorthogonal CSK/CDMA system improves when received-power adaptive access control is applied, and its throughput performance is better than that of the orthogonal CSK/CDMA system at any Eb/N0. We conclude that the nonorthogonal CSK/CDMA system with the received-power adaptive access control scheme is expected to be effective in advanced wireless networks.

  17. Wavelet approximation of correlated wave functions. II. Hyperbolic wavelets and adaptive approximation schemes

    NASA Astrophysics Data System (ADS)

    Luo, Hongjun; Kolb, Dietmar; Flad, Heinz-Jurgen; Hackbusch, Wolfgang; Koprucki, Thomas

    2002-08-01

    We have studied various aspects concerning the use of hyperbolic wavelets and adaptive approximation schemes for wavelet expansions of correlated wave functions. In order to analyze the consequences of reduced regularity of the wave function at the electron-electron cusp, we first considered a realistic exactly solvable many-particle model in one dimension. Convergence rates of wavelet expansions, with respect to L2 and H1 norms and the energy, were established for this model. We compare the performance of hyperbolic wavelets and their extensions through adaptive refinement in the cusp region, to a fully adaptive treatment based on the energy contribution of individual wavelets. Although hyperbolic wavelets show an inferior convergence behavior, they can be easily refined in the cusp region yielding an optimal convergence rate for the energy. Preliminary results for the helium atom are presented, which demonstrate the transferability of our observations to more realistic systems. We propose a contraction scheme for wavelets in the cusp region, which reduces the number of degrees of freedom and yields a favorable cost to benefit ratio for the evaluation of matrix elements.

  18. Development of a solution adaptive unstructured scheme for quasi-3D inviscid flows through advanced turbomachinery cascades

    NASA Technical Reports Server (NTRS)

    Usab, William J., Jr.; Jiang, Yi-Tsann

    1991-01-01

    The objective of the present research is to develop a general solution adaptive scheme for the accurate prediction of inviscid quasi-three-dimensional flow in advanced compressor and turbine designs. The adaptive solution scheme combines an explicit finite-volume time-marching scheme for unstructured triangular meshes and an advancing front triangular mesh scheme with a remeshing procedure for adapting the mesh as the solution evolves. The unstructured flow solver has been tested on a series of two-dimensional airfoil configurations including a three-element analytic test case presented here. Mesh adapted quasi-three-dimensional Euler solutions are presented for three spanwise stations of the NASA rotor 67 transonic fan. Computed solutions are compared with available experimental data.

  19. Modeling solute transport in distribution networks with variable demand and time step sizes.

    SciTech Connect

    Peyton, Chad E.; Bilisoly, Roger Lee; Buchberger, Steven G.; McKenna, Sean Andrew; Yarrington, Lane

    2004-06-01

    The effect of variable demands at short time scales on the transport of a solute through a water distribution network has not previously been studied. We simulate flow and transport in a small water distribution network using EPANET to explore the effect of variable demand on solute transport across a range of hydraulic time step scales from 1 minute to 2 hours. We show that variable demands at short time scales can have the following effects: smoothing of a pulse of tracer injected into a distribution network and increasing the variability of both the transport pathway and transport timing through the network. Variable demands are simulated for these different time step sizes using a previously developed Poisson rectangular pulse (PRP) demand generator that considers demand at a node to be a combination of exponentially distributed arrival times with log-normally distributed intensities and durations. Solute is introduced at a tank and at three different network nodes and concentrations are modeled through the system using the Lagrangian transport scheme within EPANET. The transport equations within EPANET assume perfect mixing of the solute within a parcel of water and therefore physical dispersion cannot occur. However, variation in demands along the solute transport path contribute to both removal and distortion of the injected pulse. The model performance measures examined are the distribution of the Reynolds number, the variation in the center of mass of the solute across time, and the transport path and timing of the solute through the network. Variation in all three performance measures is greatest at the shortest time step sizes. As the scale of the time step increases, the variability in these performance measures decreases. The largest time steps produce results that are inconsistent with the results produced by the smaller time steps.
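
    The PRP demand model lends itself to a compact sketch. This is a hedged illustration: the parameter values below are arbitrary, and the actual generator used with EPANET involves calibration details omitted here.

    ```python
    import numpy as np

    def prp_demand(horizon, rate, mu_i, sig_i, mu_d, sig_d, dt, rng):
        """Poisson rectangular pulse (PRP) demand generator sketch: pulse
        arrivals are Poisson (exponential inter-arrival times with mean
        1/rate), and each pulse has a log-normally distributed intensity and
        duration; overlapping pulses superpose."""
        steps = int(horizon / dt)
        demand = np.zeros(steps)
        t = rng.exponential(1.0 / rate)             # first arrival
        while t < horizon:
            intensity = rng.lognormal(mu_i, sig_i)  # pulse height
            duration = rng.lognormal(mu_d, sig_d)   # pulse width in seconds
            a = int(t / dt)
            b = min(int((t + duration) / dt) + 1, steps)
            demand[a:b] += intensity
            t += rng.exponential(1.0 / rate)        # next arrival
        return demand

    # one hour of nodal demand sampled at a 1 s hydraulic time step
    rng = np.random.default_rng(1)
    demand = prp_demand(horizon=3600.0, rate=1 / 120, mu_i=0.0, sig_i=0.5,
                        mu_d=3.0, sig_d=0.5, dt=1.0, rng=rng)
    ```

    Aggregating such a series to coarser hydraulic time steps smooths out exactly the short-scale variability whose effect on solute transport the study examines.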

  20. Designing Adaptive Low-Dissipative High Order Schemes for Long-Time Integrations. Chapter 1

    NASA Technical Reports Server (NTRS)

    Yee, Helen C.; Sjoegreen, B.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A general framework for the design of adaptive low-dissipative high order schemes is presented. It encompasses a rather complete treatment of the numerical approach based on four integrated design criteria: (1) For stability considerations, condition the governing equations before the application of the appropriate numerical scheme whenever it is possible; (2) For consistency, compatible schemes that possess stability properties, including physical and numerical boundary condition treatments, similar to those of the discrete analogue of the continuum are preferred; (3) For the minimization of numerical dissipation contamination, efficient and adaptive numerical dissipation control to further improve nonlinear stability and accuracy should be used; and (4) For practical considerations, the numerical approach should be efficient and applicable to general geometries, and an efficient and reliable dynamic grid adaptation should be used if necessary. These design criteria are, in general, very useful to a wide spectrum of flow simulations. However, the demand on the overall numerical approach for nonlinear stability and accuracy is much more stringent for long-time integration of complex multiscale viscous shock/shear/turbulence/acoustics interactions and numerical combustion. Robust classical numerical methods for less complex flow physics are not suitable or practical for such applications. The present approach is designed expressly to address such flow problems, especially unsteady flows. The approach also seeks to minimize reliance on the very fine grids otherwise needed to avoid spurious numerical solutions and/or instability due to under-resolved grids. The incremental studies that illustrate the performance of the approach are summarized. Extensive testing and full implementation of the approach are forthcoming. The results shown so far are very encouraging.

  1. Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.

  2. Time-step Considerations in Particle Simulation Algorithms for Coulomb Collisions in Plasmas

    SciTech Connect

    Cohen, B I; Dimits, A; Friedman, A; Caflisch, R

    2009-10-29

    The accuracy of first-order Euler and higher-order time-integration algorithms for grid-based Langevin-equation collision models is assessed in a specific relaxation test problem. We show that statistical noise errors can overshadow time-step errors, and argue that statistical noise errors can be conflated with time-step effects: using a higher-order integration scheme may not achieve any benefit in accuracy for examples of practical interest. We also investigate the collisional relaxation of an initial electron-ion relative drift and the collisional relaxation to a resistive steady state, in which a quasi-steady current is driven by a constant applied electric field, as functions of the time step used to resolve the collision processes using binary and grid-based, test-particle Langevin-equation models. We compare results from two grid-based Langevin-equation collision algorithms to results from a binary collision algorithm for modeling electron-ion collisions. Some guidance is provided regarding how large a time step can be used, relative to the inverse of the characteristic collision frequency, for specific relaxation processes.
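The interplay between statistical noise and time-step bias can be illustrated with a scalar Ornstein-Uhlenbeck relaxation model, used here purely as a stand-in for the full grid-based Langevin collision operator; the drag coefficient, temperature, and ensemble size below are illustrative, not values from the study.

```python
import math
import random

def relax_euler(v0, nu, T, dt, n_particles, rng):
    """First-order Euler-Maruyama integration of the Langevin equation
    dv = -nu*v dt + sqrt(2*nu*T) dW for an ensemble of particles;
    returns the ensemble-mean velocity after one collision time t = 1/nu."""
    n_steps = int(round(1.0 / (nu * dt)))
    sigma = math.sqrt(2.0 * nu * T * dt)
    vs = [v0] * n_particles
    for _ in range(n_steps):
        vs = [v - nu * v * dt + sigma * rng.gauss(0.0, 1.0) for v in vs]
    return sum(vs) / n_particles

rng = random.Random(7)
exact = 2.0 * math.exp(-1.0)   # exact ensemble-mean drift after t = 1/nu
# Coarse step: deterministic Euler bias ~ nu*dt is comparable to, or larger
# than, the O(1/sqrt(N)) statistical noise of the ensemble mean.
coarse = relax_euler(2.0, nu=1.0, T=0.01, dt=0.1, n_particles=400, rng=rng)
fine = relax_euler(2.0, nu=1.0, T=0.01, dt=0.01, n_particles=400, rng=rng)
```

With a modest ensemble, shrinking the time step below the noise floor stops paying off, which is the sense in which higher-order integration may bring no practical benefit.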

  3. The adaptive GRP scheme for compressible fluid flows over unstructured meshes

    NASA Astrophysics Data System (ADS)

    Li, Jiequan; Zhang, Yongjin

    2013-06-01

    Unstructured mesh methods have attracted much attention in the CFD community due to their flexibility in dealing with complex geometries and the ease with which they incorporate adaptive (moving) mesh strategies. When the finite volume framework is applied, a reliable solver is crucial for the construction of numerical fluxes; the generalized Riemann problem (GRP) scheme undertakes this task with second-order accuracy. Combining these techniques yields a second-order accurate adaptive generalized Riemann problem (AGRP) scheme for two-dimensional compressible fluid flows over unstructured triangular meshes. Besides the generation of meshes, this combination consists of two main ingredients: fluid dynamical evolution and mesh redistribution. The fluid dynamical evolution ingredient serves to evolve the compressible fluid flows on a fixed nonuniform triangular mesh with the direct Eulerian GRP solver. The role of the mesh redistribution is to redistribute mesh points, on which a conservative interpolation formula is adopted to calculate the cell averages of the conservative variables, while the gradients of primitive variables are reconstructed using the least squares method. Several examples are taken from various contexts to demonstrate the performance of such a program.

  4. An adaptive high-order hybrid scheme for compressive, viscous flows with detailed chemistry

    NASA Astrophysics Data System (ADS)

    Ziegler, Jack L.; Deiterding, Ralf; Shepherd, Joseph E.; Pullin, D. I.

    2011-08-01

    A hybrid weighted essentially non-oscillatory (WENO)/centered-difference numerical method, with low numerical dissipation, high-order shock-capturing, and structured adaptive mesh refinement (SAMR), has been developed for the direct numerical simulation of the multicomponent, compressible, reactive Navier-Stokes equations. The method enables accurate resolution of diffusive processes within reaction zones. The approach combines time-split reactive source terms with a high-order, shock-capturing scheme specifically designed for diffusive flows. A description of the order-optimized, symmetric, finite difference, flux-based, hybrid WENO/centered-difference scheme is given, along with its implementation in a high-order SAMR framework. The implementation of new techniques for discontinuity flagging, scheme-switching, and high-order prolongation and restriction is described. In particular, the refined methodology does not require upwinded WENO at grid refinement interfaces for stability, allowing high-order prolongation and thereby eliminating a significant source of numerical diffusion and improving overall code performance. A series of one- and two-dimensional test problems is used to verify the implementation, specifically the high-order accuracy of the diffusion terms. One-dimensional benchmarks include a viscous shock wave and a laminar flame. In two space dimensions, a Lamb-Oseen vortex and an unstable diffusive detonation are considered, for which quantitative convergence is demonstrated. Further, a two-dimensional high-resolution simulation of a reactive Mach reflection phenomenon with diffusive multi-species mixing is presented.

  5. Dynamic adaptive chemistry with operator splitting schemes for reactive flow simulations

    NASA Astrophysics Data System (ADS)

    Ren, Zhuyin; Xu, Chao; Lu, Tianfeng; Singer, Michael A.

    2014-04-01

    A numerical technique that uses dynamic adaptive chemistry (DAC) with operator splitting schemes to solve the equations governing reactive flows is developed and demonstrated. Strang-based splitting schemes are used to separate the governing equations into transport fractional substeps and chemical reaction fractional substeps. The DAC method expedites the numerical integration of reaction fractional substeps by using locally valid skeletal mechanisms that are obtained using the directed relation graph (DRG) reduction method to eliminate unimportant species and reactions from the full mechanism. Second-order temporal accuracy of the Strang-based splitting schemes with DAC is demonstrated on one-dimensional, unsteady, freely-propagating, premixed methane/air laminar flames with detailed chemical kinetics and realistic transport. The use of DAC dramatically reduces the CPU time required to perform the simulation, and there is minimal impact on solution accuracy. It is shown that with DAC the starting species and resulting skeletal mechanisms strongly depend on the local composition in the flames. In addition, the number of retained species may be significant only near the flame front region where chemical reactions are significant. For the one-dimensional methane/air flame considered, speed-up factors of three and five are achieved over the entire simulation for GRI-Mech 3.0 and USC-Mech II, respectively. Greater speed-up factors are expected for larger chemical kinetics mechanisms.
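Strang splitting as used above can be sketched on a scalar model equation with a linear "transport" part and a nonlinear "reaction" part, each advanced exactly per substep; the model and coefficients are illustrative, not the reactive-flow solver itself. Second-order accuracy shows up as a roughly factor-of-four error reduction when the step is halved.

```python
import math

def strang_step(y, h, a):
    """One Strang step for dy/dt = -a*y - y**2, split into a linear
    'transport' substep (exact exponential) and a nonlinear 'reaction'
    substep (exact rational solution), in T(h/2) R(h) T(h/2) order."""
    y = y * math.exp(-a * h / 2.0)   # half transport substep
    y = y / (1.0 + h * y)            # full reaction substep, dy/dt = -y^2
    y = y * math.exp(-a * h / 2.0)   # half transport substep
    return y

def integrate(y0, a, T, h):
    y, n = y0, int(round(T / h))
    for _ in range(n):
        y = strang_step(y, h, a)
    return y

a, y0, T = 1.0, 1.0, 1.0
# Exact solution of the unsplit Bernoulli equation dy/dt = -a*y - y^2.
exact = 1.0 / ((1.0 / y0 + 1.0 / a) * math.exp(a * T) - 1.0 / a)
err_h = abs(integrate(y0, a, T, 0.1) - exact)
err_h2 = abs(integrate(y0, a, T, 0.05) - exact)
```

In the paper's setting, the "reaction" substep would additionally be accelerated by integrating only the DAC-reduced skeletal mechanism.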

  6. Scheduling and adaptation of London's future water supply and demand schemes under uncertainty

    NASA Astrophysics Data System (ADS)

    Huskova, Ivana; Matrosov, Evgenii S.; Harou, Julien J.; Kasprzyk, Joseph R.; Reed, Patrick M.

    2015-04-01

    The changing needs of society and the uncertainty of future conditions complicate the planning of future water infrastructure and its operating policies. These systems must meet the multi-sector demands of a range of stakeholders whose objectives often conflict. Understanding these conflicts requires exploring many alternative plans to identify possible compromise solutions and important system trade-offs. The uncertainties associated with future conditions such as climate change and population growth challenge the decision making process. Ideally planners should consider portfolios of supply and demand management schemes represented as dynamic trajectories over time able to adapt to the changing environment whilst considering many system goals and plausible futures. Decisions can be scheduled and adapted over the planning period to minimize the present cost of portfolios while maintaining the supply-demand balance and ecosystem services as the future unfolds. Yet such plans are difficult to identify due to the large number of alternative plans to choose from, the uncertainty of future conditions and the computational complexity of such problems. Our study optimizes London's future water supply system investments as well as their scheduling and adaptation over time using many-objective scenario optimization, an efficient water resource system simulator, and visual analytics for exploring key system trade-offs. The solutions are compared to Pareto approximate portfolios obtained from previous work in which the composition of infrastructure portfolios did not change over the planning period. We explore how the visual analysis of solutions can aid decision making by investigating the implied performance trade-offs and how the individual schemes and their trajectories present in the Pareto approximate portfolios affect the system's behaviour.
By doing so, decision makers are given the opportunity to decide the balance between many system goals a posteriori as well as

  7. Short-term Time Step Convergence in a Climate Model

    SciTech Connect

    Wan, Hui; Rasch, Philip J.; Taylor, Mark; Jablonowski, Christiane

    2015-02-11

    A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4 in contrast to the expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.
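The convergence test above reduces, for a toy one-equation stand-in for the climate model, to comparing end states against a fine-step reference run and taking a log-ratio of errors; a forward-Euler "model" should report an observed order near one. Everything below is illustrative, not the cited testing procedure.

```python
import math

def run_model(dt, T=1.0, y0=1.0):
    """Toy 'model': forward Euler for dy/dt = -y, integrated to time T."""
    y, n = y0, int(round(T / dt))
    for _ in range(n):
        y += dt * (-y)
    return y

def observed_order(dt, ref_dt=1e-4):
    """Estimate the convergence order from end-state errors at dt and dt/2,
    each measured against a fine-step reference run (the scalar analogue of
    the paper's RMS temperature difference)."""
    ref = run_model(ref_dt)
    e1 = abs(run_model(dt) - ref)
    e2 = abs(run_model(dt / 2.0) - ref)
    return math.log(e1 / e2, 2.0)

p = observed_order(0.05)
```

A full-model order well below the estimate obtained for the dynamical core alone is the signature, as in the paper, of a time-stepping error introduced by one of the coupled process parameterizations.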

  8. Obtaining Runge-Kutta Solutions Between Time Steps

    NASA Technical Reports Server (NTRS)

    Horn, M. K.

    1984-01-01

    A new interpolation method is used with existing Runge-Kutta algorithms. The algorithm evaluates the solution at an intermediate point within an integration step, and only a few additional computations are required to produce the intermediate solution data. The Runge-Kutta method provides an accurate solution with larger time steps than other methods allow.
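The report's specific interpolant is not reproduced here, but the general idea of obtaining solutions between Runge-Kutta time steps can be sketched with a standard cubic Hermite interpolant over one step: it reuses the endpoint values and slopes the integrator has already computed, so the intermediate solution costs almost nothing extra.

```python
import math

def rk4_step(f, t, y, h):
    """Classical RK4 step; also returns f(t, y) so the left-endpoint slope
    can be reused for interpolation."""
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h / 2.0 * k1)
    k3 = f(t + h / 2.0, y + h / 2.0 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4), k1

def hermite(theta, y0, f0, y1, f1, h):
    """Cubic Hermite interpolant on one step, theta in [0, 1], built from
    the endpoint values (y0, y1) and slopes (f0, f1)."""
    h00 = (1 + 2 * theta) * (1 - theta) ** 2
    h10 = theta * (1 - theta) ** 2
    h01 = theta ** 2 * (3 - 2 * theta)
    h11 = theta ** 2 * (theta - 1)
    return h00 * y0 + h10 * h * f0 + h01 * y1 + h11 * h * f1

f = lambda t, y: -y            # test problem with exact solution exp(-t)
h, y0 = 0.1, 1.0
y1, f0 = rk4_step(f, 0.0, y0, h)
f1 = f(h, y1)
y_mid = hermite(0.5, y0, f0, y1, f1, h)   # solution estimate at t = h/2
```

Only the extra evaluation of `f1` is needed beyond what the step itself produced, and in many tableaus even that slope is already available.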

  9. Short-term Time Step Convergence in a Climate Model

    DOE PAGESBeta

    Wan, Hui; Rasch, Philip J.; Taylor, Mark; Jablonowski, Christiane

    2015-02-11

    A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4 in contrast to the expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.

  10. Adaptive finite-volume WENO schemes on dynamically redistributed grids for compressible Euler equations

    NASA Astrophysics Data System (ADS)

    Pathak, Harshavardhana S.; Shukla, Ratnesh K.

    2016-08-01

    A high-order adaptive finite-volume method is presented for simulating inviscid compressible flows on time-dependent redistributed grids. The method achieves dynamic adaptation through a combination of time-dependent mesh node clustering in regions characterized by strong solution gradients and an optimal selection of the order of accuracy and the associated reconstruction stencil in a conservative finite-volume framework. This combined approach maximizes spatial resolution in discontinuous regions that require low-order approximations for oscillation-free shock capturing. Over smooth regions, high-order discretization through finite-volume WENO schemes minimizes numerical dissipation and provides excellent resolution of intricate flow features. The method, including the moving mesh equations and the compressible flow solver, is formulated entirely on a transformed time-independent computational domain discretized using a simple uniform Cartesian mesh. Approximations for the metric terms that enforce the discrete geometric conservation law while preserving the fourth-order accuracy of the two-point Gaussian quadrature rule are developed. Spurious Cartesian grid induced shock instabilities such as carbuncles that feature in a local one-dimensional contact capturing treatment along the cell face normals are effectively eliminated through upwind flux calculation using a rotated Harten-Lax-van Leer contact resolving (HLLC) approximate Riemann solver for the Euler equations in generalized coordinates. Numerical experiments with the fifth- and ninth-order WENO reconstructions at the two-point Gaussian quadrature nodes, over a range of challenging test cases, indicate that the redistributed mesh effectively adapts to the dynamic flow gradients, thereby improving the solution accuracy substantially even when the initial starting mesh is non-adaptive. The high adaptivity combined with the fifth- and especially the ninth-order WENO reconstruction allows remarkably sharp capture of

  11. Dependence of aqua-planet simulations on time step

    NASA Astrophysics Data System (ADS)

    Williamson, David L.; Olson, Jerry G.

    2003-04-01

    Aqua-planet simulations with Eulerian and semi-Lagrangian dynamical cores coupled to the NCAR CCM3 parametrization suite produce very different zonal average precipitation patterns. The model with the Eulerian core forms a narrow single precipitation peak centred on the sea surface temperature (SST) maximum. The one with the semi-Lagrangian core forms a broad structure, often with a double peak straddling the SST maximum and a precipitation minimum centred on the SST maximum. The different structure is shown to be caused primarily by the different time step adopted by each core and its effect on the parametrizations, rather than by different truncation errors introduced by the dynamical cores themselves. With a longer discrete time step, the surface exchange parametrization deposits more moisture in the atmosphere in a single time step, resulting in convection being initiated farther from the equator, closer to the maximum source. Different diffusive smoothing associated with different spectral resolutions is a secondary effect influencing the strength of the double structure. When the semi-Lagrangian core is configured to match the Eulerian with the same time step, a three-time-level formulation, and the same spectral truncation, it produces precipitation fields similar to those from the Eulerian. It is argued that the broad and double structure forms in this model with the longer time step because more water is put into the atmosphere over a longer discrete time step, the evaporation rate being the same. The additional water vapour in the region of equatorial moisture convergence results in more convective available potential energy farther from the equator, which allows convection to initiate farther from the equator. The resulting heating drives upward vertical motion and low-level convergence away from the equator, resulting in much weaker upward motion at the equator.
The feedback between the convective heating and dynamics reduces the instability at the equator and

  12. Development of a variable time-step transient NEM code: SPANDEX

    SciTech Connect

    Aviles, B.N.

    1993-01-01

    This paper describes a three-dimensional, variable time-step transient multigroup diffusion theory code, SPANDEX (space-time nodal expansion method). SPANDEX is based on the static nodal expansion method (NEM) code, NODEX (Ref. 1), and employs a nonlinear algorithm and a fifth-order expansion of the transverse-integrated fluxes. The time integration scheme in SPANDEX is a fourth-order implicit generalized Runge-Kutta method (GRK) with on-line error control and variable time-step selection. This Runge-Kutta method has been applied previously to point kinetics and one-dimensional finite difference transient analysis. This paper describes the application of the Runge-Kutta method to three-dimensional reactor transient analysis in a multigroup NEM code.
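The on-line error control and variable time-step selection can be sketched with a much simpler method than SPANDEX's fourth-order implicit GRK: here an explicit Heun step with an embedded Euler error estimate illustrates the accept/reject logic and the standard safety-factored step-size rule. All names and tolerances are illustrative.

```python
import math

def adaptive_heun(f, t, y, t_end, tol, dt=0.1, dt_min=1e-6):
    """Heun (2nd order) with an embedded Euler (1st order) error estimate.
    A step is accepted when the estimate meets `tol`; the next step size is
    chosen from the usual (tol/err)**(1/p) rule with a 0.9 safety factor."""
    steps = []
    while t < t_end:
        dt = min(dt, t_end - t)
        k1 = f(t, y)
        k2 = f(t + dt, y + dt * k1)
        y_high = y + dt / 2.0 * (k1 + k2)     # Heun (trapezoidal predictor)
        err = abs(y_high - (y + dt * k1))     # |Heun - Euler| error estimate
        if err <= tol or dt <= dt_min:
            t, y = t + dt, y_high             # accept the higher-order value
            steps.append(dt)
        # Grow or shrink the step (applies after both accepts and rejects).
        dt = max(dt_min, 0.9 * dt * math.sqrt(tol / max(err, 1e-300)))
    return y, steps

y_end, steps = adaptive_heun(lambda t, y: -y, 0.0, 1.0, 1.0, tol=1e-5)
```

The same accept/reject structure carries over to higher-order embedded pairs; only the order `p` in the step-size exponent changes.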

  13. The multiple time step r-RESPA procedure and polarizable potentials based on induced dipole moments

    NASA Astrophysics Data System (ADS)

    Masella, Michel

    In the present study, we describe an accelerating scheme based on the reversible multiple time step r-RESPA method to be used in molecular dynamics simulations with polarizable potentials based on induced dipole moments. Even if the induced dipoles are estimated with an iterative self-consistent procedure, this scheme significantly reduces the CPU time needed to perform a molecular dynamics simulation, by up to a factor of 2, as compared to the Car-Parrinello method, where additional dynamical variables are introduced for the treatment of the induced dipoles. The tests show that stable and reliable molecular dynamics trajectories can be generated with this scheme, and that the physical properties derived from the trajectories are equivalent to those computed with the classical all-atom iterative approach and the Car-Parrinello method.
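The r-RESPA idea can be sketched for a single particle with a stiff and a soft spring force, the soft force standing in (purely for illustration) for the slowly varying polarization interactions: the stiff force is integrated with a small inner step, while the soft force is applied only as half-kicks at the outer step. All parameters are invented for the example.

```python
def respa_trajectory(x, v, dt_outer, n_inner, k_fast, k_slow, n_steps, m=1.0):
    """Velocity-Verlet r-RESPA for one particle: the stiff force (-k_fast*x)
    is advanced with the small inner step, while the soft force (-k_slow*x)
    is applied as impulses at the outer step boundaries only."""
    dt = dt_outer / n_inner
    energies = []
    for _ in range(n_steps):
        v += 0.5 * dt_outer * (-k_slow * x) / m     # outer half kick (slow)
        for _ in range(n_inner):                    # inner velocity Verlet
            v += 0.5 * dt * (-k_fast * x) / m
            x += dt * v
            v += 0.5 * dt * (-k_fast * x) / m
        v += 0.5 * dt_outer * (-k_slow * x) / m     # outer half kick (slow)
        energies.append(0.5 * m * v * v + 0.5 * (k_fast + k_slow) * x * x)
    return x, v, energies

# Stiff spring resolved by the inner step; soft spring only at outer steps.
x, v, energies = respa_trajectory(1.0, 0.0, dt_outer=0.05, n_inner=10,
                                  k_fast=100.0, k_slow=1.0, n_steps=1000)
```

Because the splitting is reversible and symplectic, the total energy stays bounded even though the soft force is evaluated ten times less often; in the polarizable-potential setting, it is the expensive self-consistent dipole iteration that sits in the rarely evaluated slow class.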

  14. Fast adaptive schemes for tracking voltage phasor and local frequency in power transmission and distribution systems

    SciTech Connect

    Kamwa, I.; Grondin, R.

    1992-04-01

    Real-time measurements of the voltage phasor and local frequency deviation find applications in computer-based relaying, static state estimation, disturbance monitoring and control. This paper proposes two learning schemes for fast estimation of these basic quantities. We attack the problem from a system-identification perspective, in contrast to the well-established Extended Kalman Filtering (EKF) technique. It is shown that, from a simple non-linear model of the system voltage which involves only two parameters, the Recursive Least Squares (RLS) and the Least Mean Squares (LMS) algorithms can each provide dynamic estimates of the voltage phasor. The finite derivative of the phase deviation, followed by a moving-average filter, then leads to the local frequency deviation. A constant forgetting factor included in these algorithms provides both fast adaptation in time-varying situations and good smoothing of the estimates when necessary.
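A minimal sketch of the RLS branch: the voltage model v(t) = a·cos(ωt) − b·sin(ωt) is linear in the two parameters (a, b), so a two-parameter RLS recursion with a constant forgetting factor recovers the phasor amplitude and phase. This is an illustrative reconstruction, not the paper's implementation; the sampling rate and forgetting factor are invented, and the frequency-deviation post-processing step is omitted.

```python
import math

def rls_phasor(samples, omega, dt, lam=0.98):
    """Recursive least squares with forgetting factor `lam`, fitting
    v(t) = a*cos(omega*t) - b*sin(omega*t); the phasor is then
    A = hypot(a, b), phi = atan2(b, a)."""
    theta = [0.0, 0.0]
    P = [[1e6, 0.0], [0.0, 1e6]]     # large initial covariance (weak prior)
    for k, v in enumerate(samples):
        t = k * dt
        h = [math.cos(omega * t), -math.sin(omega * t)]   # regressor
        Ph = [P[0][0] * h[0] + P[0][1] * h[1],
              P[1][0] * h[0] + P[1][1] * h[1]]
        denom = lam + h[0] * Ph[0] + h[1] * Ph[1]
        g = [Ph[0] / denom, Ph[1] / denom]                # RLS gain
        e = v - (h[0] * theta[0] + h[1] * theta[1])       # prediction error
        theta = [theta[0] + g[0] * e, theta[1] + g[1] * e]
        P = [[(P[0][0] - g[0] * Ph[0]) / lam, (P[0][1] - g[0] * Ph[1]) / lam],
             [(P[1][0] - g[1] * Ph[0]) / lam, (P[1][1] - g[1] * Ph[1]) / lam]]
    a, b = theta
    return math.hypot(a, b), math.atan2(b, a)

omega, dt = 2 * math.pi * 60.0, 1.0 / 1920.0   # 60 Hz, 32 samples per cycle
samples = [1.5 * math.cos(omega * k * dt + 0.3) for k in range(200)]
A_est, phi_est = rls_phasor(samples, omega, dt)
```

The forgetting factor λ < 1 discounts old samples geometrically, which is what lets the same recursion track a time-varying phasor instead of converging to a fixed average.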

  15. Astrophysical hydrodynamics with a high-order discontinuous Galerkin scheme and adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Schaal, Kevin; Bauer, Andreas; Chandrashekar, Praveen; Pakmor, Rüdiger; Klingenberg, Christian; Springel, Volker

    2015-11-01

    Solving the Euler equations of ideal hydrodynamics as accurately and efficiently as possible is a key requirement in many astrophysical simulations. It is therefore important to continuously advance the numerical methods implemented in current astrophysical codes, especially also in light of evolving computer technology, which favours certain computational approaches over others. Here we introduce the new adaptive mesh refinement (AMR) code TENET, which employs a high-order discontinuous Galerkin (DG) scheme for hydrodynamics. The Euler equations in this method are solved in a weak formulation with a polynomial basis by means of explicit Runge-Kutta time integration and Gauss-Legendre quadrature. This approach offers significant advantages over commonly employed second-order finite-volume (FV) solvers. In particular, the higher order capability renders it computationally more efficient, in the sense that the same precision can be obtained at significantly less computational cost. Also, the DG scheme inherently conserves angular momentum in regions where no limiting takes place, and it typically produces much smaller numerical diffusion and advection errors than an FV approach. A further advantage lies in a more natural handling of AMR refinement boundaries, where a fall-back to first order can be avoided. Finally, DG requires no wide stencils at high order, and offers an improved data locality and a focus on local computations, which is favourable for current and upcoming highly parallel supercomputers. We describe the formulation and implementation details of our new code, and demonstrate its performance and accuracy with a set of two- and three-dimensional test problems. The results confirm that DG schemes have a high potential for astrophysical applications.

  16. Adaptive Numerical Dissipation Control in High Order Schemes for Multi-D Non-Ideal MHD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, B.

    2005-01-01

    The required type and amount of numerical dissipation/filter to accurately resolve all relevant multiscales of complex MHD unsteady high-speed shock/shear/turbulence/combustion problems are not only physical problem dependent, but also vary from one flow region to another. In addition, proper and efficient control of the divergence of the magnetic field (Div(B)) numerical error for high order shock-capturing methods poses extra requirements for the considered type of CPU intensive computations. The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that include viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free from numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multiresolution wavelets (WAV) (for the above types of flow feature). These filters also provide a natural and efficient way for the minimization of Div(B) numerical error.

  17. A General Hybrid Radiation Transport Scheme for Star Formation Simulations on an Adaptive Grid

    NASA Astrophysics Data System (ADS)

    Klassen, Mikhail; Kuiper, Rolf; Pudritz, Ralph E.; Peters, Thomas; Banerjee, Robi; Buntemeyer, Lars

    2014-12-01

    Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.

  18. A general hybrid radiation transport scheme for star formation simulations on an adaptive grid

    SciTech Connect

    Klassen, Mikhail; Pudritz, Ralph E.; Kuiper, Rolf; Peters, Thomas; Banerjee, Robi; Buntemeyer, Lars

    2014-12-10

    Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.

  19. An Adaptively-Refined, Cartesian, Cell-Based Scheme for the Euler and Navier-Stokes Equations. Ph.D. Thesis - Michigan Univ.

    NASA Technical Reports Server (NTRS)

    Coirier, William John

    1994-01-01

    A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to a structured solver formulation. A non-smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other, to experimental results and/or theory for a series of low and moderate Reynolds number flow fields. The most suitable viscous discretization is demonstrated for geometrically-complicated internal flows. For flows at high Reynolds numbers, both an altered grid-generation procedure and a

  20. Hierarchical adaptation scheme for multiagent data fusion and resource management in situation analysis

    NASA Astrophysics Data System (ADS)

    Benaskeur, Abder R.; Roy, Jean

    2001-08-01

    Sensor Management (SM) has to do with how to best manage, coordinate and organize the use of sensing resources in a manner that synergistically improves the process of data fusion. Based on the contextual information, SM develops options for collecting further information, allocates and directs the sensors towards the achievement of the mission goals and/or tunes the parameters for the real-time improvement of the effectiveness of the sensing process. Conscious of the important role that SM has to play in modern data fusion systems, we are currently studying advanced SM concepts that would help increase the survivability of the current Halifax and Iroquois Class ships, as well as their possible future upgrades. For this purpose, a hierarchical scheme has been proposed for data fusion and resource management adaptation, based on control theory and within the process refinement paradigm of the JDL data fusion model, taking into account the multi-agent model put forward by the SASS Group for the situation analysis process. The novelty of this work lies in the unified framework that has been defined for tackling the adaptation of both the fusion process and the sensor/weapon management.

  1. A self-adaptive memeplexes robust search scheme for solving stochastic demands vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Chen, Xianshun; Feng, Liang; Ong, Yew Soon

    2012-07-01

    In this article, we propose a self-adaptive memeplex robust search (SAMRS) for finding robust and reliable solutions that are less sensitive to the stochastic behaviour of customer demands and have a low probability of route failure, respectively, in the vehicle routing problem with stochastic demands (VRPSD). In particular, the contribution of this article is three-fold. First, the proposed SAMRS employs the robust solution search scheme (RS3) as an approximation of the computationally intensive Monte Carlo simulation, thus reducing the computation cost of fitness evaluation in VRPSD, while directing the search towards robust and reliable solutions. Furthermore, a self-adaptive individual learning based on the conceptual modelling of the memeplex is introduced in the SAMRS. Finally, SAMRS incorporates a gene-meme co-evolution model with genetic and memetic representation to effectively manage the search for solutions in VRPSD. Extensive experimental results are then presented for benchmark problems to demonstrate that the proposed SAMRS serves as an effective means of generating high-quality robust and reliable solutions in VRPSD.

  2. Schwarz type domain decomposition and subcycling multi-time step approach for solving Richards equation

    NASA Astrophysics Data System (ADS)

    Kuraz, Michal

    2016-06-01

    Modelling the transport processes in a vadose zone, e.g. modelling contaminant transport or the effect of the soil water regime on changes in soil structure and composition, plays an important role in predicting the reactions of soil biotopes to anthropogenic activity. Water flow is governed by the quasilinear Richards equation. The paper concerns the implementation of a multi-time-step approach for solving the nonlinear Richards equation. When modelling porous media flow with the Richards equation, a stable finite element approximation requires accurate temporal and spatial integration, due to possible convection dominance and the convergence demands of the nonlinear solver. The method presented here combines an adaptive domain decomposition algorithm with a multi-time-step treatment of the actively changing subdomains.

  3. A Dynamic Era-Based Time-Symmetric Block Time-Step Algorithm with Parallel Implementations

    NASA Astrophysics Data System (ADS)

    Kaplan, Murat; Saygin, Hasan

    2012-06-01

    The time-symmetric block time-step (TSBTS) algorithm is a newly developed efficient scheme for N-body integrations. It is constructed on an era-based iteration. In this work, we re-designed the TSBTS integration scheme with a dynamically changing era size. A number of numerical tests were performed to show the importance of choosing the size of the era, especially for long-time integrations. Our second aim was to show that the TSBTS scheme is as suitable as previously known schemes for developing parallel N-body codes. In this work, we relied on a parallel scheme using the copy algorithm for the time-symmetric scheme. We implemented a hybrid of data and task parallelization for force calculation to handle load balancing problems that can appear in practice. Using the Plummer model initial conditions for different numbers of particles, we obtained the expected efficiency and speedup for a small number of particles. Although parallelization of the direct N-body codes is negatively affected by the communication/calculation ratios, we obtained good load-balanced results. Moreover, we were able to conserve the advantages of the algorithm (e.g., energy conservation for long-term simulations).
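The block time-step idea underlying this record can be sketched in isolation: each particle's natural time step is rounded down to a power-of-two fraction of a maximum step, so particles fall into a small number of synchronized blocks that can be advanced (and parallelized) together. A minimal sketch, where the function name and quantization rule are illustrative assumptions and the paper's era-based machinery is omitted:

```python
import math

def block_time_step(dt_natural, dt_max):
    """Round a particle's natural time step down to the nearest
    power-of-two fraction of dt_max, so that particles with similar
    timescales share a synchronized 'block'."""
    if dt_natural >= dt_max:
        return dt_max
    level = math.ceil(math.log2(dt_max / dt_natural))
    return dt_max / 2**level

# Particles in the same block can be advanced together, which is
# what makes load-balanced parallel force calculation natural.
steps = [block_time_step(dt, 1.0) for dt in (0.9, 0.3, 0.11, 0.02)]
```

Because the block sizes nest exactly (1, 1/2, 1/4, ...), all blocks re-synchronize at the end of each largest step, which is also what a time-symmetric iteration over an "era" exploits.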

  4. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics - Monte Carlo Canonical Propagation Algorithm.

    PubMed

    Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît

    2016-04-12

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826
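The Metropolis correction described above can be sketched on its own: a trajectory segment generated with the inexpensive Hamiltonian is accepted or rejected based on the energy change it produces under the expensive reference Hamiltonian, treating the cheap propagation's discretization error as external work. A minimal sketch (function name and interface are illustrative, not from the paper):

```python
import math
import random

def hybrid_mdmc_accept(dH_expensive, beta, rng=random.random):
    """Metropolis criterion evaluated on the *expensive* Hamiltonian's
    energy change dH over a trajectory segment generated with the cheap
    Hamiltonian: accept with probability min(1, exp(-beta * dH)).
    Downhill moves are accepted outright, which also avoids overflow."""
    return dH_expensive <= 0 or rng() < math.exp(-beta * dH_expensive)
```

Accepting on the reference Hamiltonian is what enforces detailed balance and keeps the sampled ensemble consistent with the Boltzmann distribution of the expensive Hamiltonian.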

  5. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics — Monte Carlo Canonical Propagation Algorithm

    PubMed Central

    Weare, Jonathan; Dinner, Aaron R.; Roux, Benoît

    2016-01-01

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826

  6. Inertial stochastic dynamics. I. Long-time-step methods for Langevin dynamics

    NASA Astrophysics Data System (ADS)

    Beard, Daniel A.; Schlick, Tamar

    2000-05-01

    Two algorithms are presented for integrating the Langevin dynamics equation with long numerical time steps while treating the mass terms as finite. The development of these methods is motivated by the need for accurate methods for simulating slow processes in polymer systems such as two-site intermolecular distances in supercoiled DNA, which evolve over the time scale of milliseconds. Our new approaches refine the common Brownian dynamics (BD) scheme, which approximates the Langevin equation in the highly damped diffusive limit. Our LTID ("long-time-step inertial dynamics") method is based on an eigenmode decomposition of the friction tensor. The less costly integrator IBD ("inertial Brownian dynamics") modifies the usual BD algorithm by the addition of a mass-dependent correction term. To validate the methods, we evaluate the accuracy of LTID and IBD and compare their behavior to that of BD for the simple example of a harmonic oscillator. We find that the LTID method produces the expected correlation structure for Langevin dynamics regardless of the level of damping. In fact, LTID is the only consistent method among the three, with error vanishing as the time step approaches zero. In contrast, BD is accurate only for highly overdamped systems. For cases of moderate overdamping, and for the appropriate choice of time step, IBD is significantly more accurate than BD. IBD is also less computationally expensive than LTID (though both are the same order of complexity as BD), and thus can be applied to simulate systems of size and time scale ranges previously accessible to only the usual BD approach. Such simulations are discussed in our companion paper, for long DNA molecules modeled as wormlike chains.
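For orientation, the baseline that LTID and IBD refine is the overdamped Brownian dynamics update, which for the paper's harmonic-oscillator test case reads x <- x + (F/gamma)*dt + sqrt(2*kT*dt/gamma)*xi with xi ~ N(0, 1). A minimal sketch of that BD step (names are illustrative; the mass-dependent correction of IBD and the eigenmode machinery of LTID are deliberately omitted):

```python
import math
import random

def bd_step(x, k, gamma, kT, dt, rng=random.gauss):
    """One overdamped Brownian dynamics step for a harmonic oscillator
    with force F = -k*x: deterministic drift F/gamma * dt plus a
    Gaussian kick with variance 2*kT*dt/gamma. This is the diffusive
    (highly damped) limit of the Langevin equation."""
    drift = -k * x / gamma * dt
    kick = math.sqrt(2.0 * kT * dt / gamma) * rng(0.0, 1.0)
    return x + drift + kick
```

As the abstract notes, this update is only accurate when damping dominates inertia; the inertial methods add mass-dependent terms so the correct correlation structure survives at moderate damping.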

  7. A multiple time stepping algorithm for efficient multiscale modeling of platelets flowing in blood plasma

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Zhang, Na; Deng, Yuefan; Bluestein, Danny

    2015-03-01

    We developed a multiple time-stepping (MTS) algorithm for multiscale modeling of the dynamics of platelets flowing in viscous blood plasma. This MTS algorithm improves computational efficiency considerably without significant loss of accuracy. This study of the dynamic properties of flowing platelets employs a combination of the dissipative particle dynamics (DPD) and the coarse-grained molecular dynamics (CGMD) methods to describe the dynamic microstructures of deformable platelets in response to extracellular flow-induced stresses. The disparate spatial scales between the two methods are handled by a hybrid force field interface. However, the disparity in temporal scales between DPD and CGMD, which require time stepping at microseconds and nanoseconds respectively, represents a computational challenge that may become prohibitive. Classical MTS algorithms manage to improve computing efficiency by multi-stepping within DPD or CGMD for up to one order of magnitude of scale differential. In order to handle the 3-4 orders of magnitude disparity in the temporal scales between DPD and CGMD, we introduce a new MTS scheme hybridizing DPD and CGMD by utilizing four different time stepping sizes. We advance the fluid system at the largest time step, the fluid-platelet interface at an intermediate timestep size, and the nonbonded and bonded potentials of the platelet structural system at the two smallest timestep sizes. Additionally, we introduce parameters to study the trade-off between accuracy and computational complexity. The numerical experiments demonstrated a 3000x reduction in computing time over standard MTS methods for solving the multiscale model. This MTS algorithm establishes a computationally feasible approach for solving a particle-based system at multiple scales and performing efficient multiscale simulations.
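The four-level stepping hierarchy described in the abstract can be sketched as a nested driver loop: the fluid advances at the largest step, and each inner level takes a fixed number of substeps per step of the level above it. The callback interface below is a hypothetical simplification of the DPD/CGMD coupling, not the authors' code:

```python
def mts_advance(n_outer, ratios, advance):
    """Nested multiple-time-stepping driver. advance[0] is the slowest
    subsystem (fluid, largest dt) and advance[3] the fastest (bonded
    platelet potentials, smallest dt); ratios[i] is the number of
    level-(i+1) substeps taken per level-i step. Returns how many times
    each level was advanced, to make the cost hierarchy visible."""
    counts = [0, 0, 0, 0]
    for _ in range(n_outer):
        advance[0](); counts[0] += 1              # fluid system
        for _ in range(ratios[0]):
            advance[1](); counts[1] += 1          # fluid-platelet interface
            for _ in range(ratios[1]):
                advance[2](); counts[2] += 1      # nonbonded potentials
                for _ in range(ratios[2]):
                    advance[3](); counts[3] += 1  # bonded potentials
    return counts
```

The call counts grow multiplicatively down the hierarchy, which is exactly why confining the smallest steps to the cheapest force terms pays off.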

  8. A Multiple Time Stepping Algorithm for Efficient Multiscale Modeling of Platelets Flowing in Blood Plasma

    PubMed Central

    Zhang, Peng; Zhang, Na; Deng, Yuefan; Bluestein, Danny

    2015-01-01

    We developed a multiple time-stepping (MTS) algorithm for multiscale modeling of the dynamics of platelets flowing in viscous blood plasma. This MTS algorithm improves computational efficiency considerably without significant loss of accuracy. This study of the dynamic properties of flowing platelets employs a combination of the dissipative particle dynamics (DPD) and the coarse-grained molecular dynamics (CGMD) methods to describe the dynamic microstructures of deformable platelets in response to extracellular flow-induced stresses. The disparate spatial scales between the two methods are handled by a hybrid force field interface. However, the disparity in temporal scales between DPD and CGMD, which require time stepping at microseconds and nanoseconds respectively, represents a computational challenge that may become prohibitive. Classical MTS algorithms manage to improve computing efficiency by multi-stepping within DPD or CGMD for up to one order of magnitude of scale differential. In order to handle the 3–4 orders of magnitude disparity in the temporal scales between DPD and CGMD, we introduce a new MTS scheme hybridizing DPD and CGMD by utilizing four different time stepping sizes. We advance the fluid system at the largest time step, the fluid-platelet interface at an intermediate timestep size, and the nonbonded and bonded potentials of the platelet structural system at the two smallest timestep sizes. Additionally, we introduce parameters to study the trade-off between accuracy and computational complexity. The numerical experiments demonstrated a 3000x reduction in computing time over standard MTS methods for solving the multiscale model. This MTS algorithm establishes a computationally feasible approach for solving a particle-based system at multiple scales and performing efficient multiscale simulations. PMID:25641983

  9. Adaptive modulation and intra-symbol frequency-domain averaging scheme for multiband OFDM UWB over fiber system

    NASA Astrophysics Data System (ADS)

    He, Jing; Li, Teng; Wen, Xuejie; Deng, Rui; Chen, Ming; Chen, Lin

    2016-01-01

    To overcome the unbalanced error bit distribution among subcarriers caused by inter-subcarrier mixing interference (ISMI) and frequency selective fading (FSF), an adaptive modulation scheme based on 64/16/4QAM modulation is proposed and experimentally investigated in an intensity-modulation direct-detection (IM/DD) multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband (UWB) over fiber system. After 50 km standard single mode fiber (SSMF) transmission, at a bit error ratio (BER) of 1×10^-3, the experimental results show that the power penalty of the IM/DD MB-OFDM UWBoF system with the 64/16/4QAM adaptive modulation scheme is about 3.6 dB compared to that with the 64QAM modulation scheme. Moreover, the receiver sensitivity is improved by about 0.52 dB when the intra-symbol frequency-domain averaging (ISFA) algorithm is employed in the IM/DD MB-OFDM UWBoF system based on the 64/16/4QAM adaptive modulation scheme. Meanwhile, after 50 km SSMF transmission, there is a negligible power penalty in the adaptively modulated IM/DD MB-OFDM UWBoF system compared to the optical back-to-back case.

  10. Three dimensional adaptive meshing scheme applied to the control of the spatial representation of complex field pattern in electromagnetics

    NASA Astrophysics Data System (ADS)

    Grosges, T.; Borouchaki, H.; Barchiesi, D.

    2010-12-01

    We present an improved adaptive mesh process based on a Riemannian transformation to control the accuracy of high-field-gradient representation in diffraction problems. Such adaptive meshing is applied to representing the electromagnetic intensity around a metallic submicronic spherical particle, which is known to present high gradients in limited zones of space, including the interference pattern of the electromagnetic field. We show that, with the precision of the field variation under control, this improved scheme drastically decreases the computational time as well as the memory requirements by adapting the number and the position of the nodes where the electromagnetic field must be computed and represented.

  11. An Indirect Adaptive Control Scheme in the Presence of Actuator and Sensor Failures

    NASA Technical Reports Server (NTRS)

    Sun, Joy Z.; Josh, Suresh M.

    2009-01-01

    The problem of controlling a system in the presence of unknown actuator and sensor faults is addressed. The system is assumed to have groups of actuators, and groups of sensors, with each group consisting of multiple redundant similar actuators or sensors. The types of actuator faults considered consist of unknown actuators stuck in unknown positions, as well as reduced actuator effectiveness. The sensor faults considered include unknown biases and outages. The approach employed for fault detection and estimation consists of a bank of Kalman filters based on multiple models, and subsequent control reconfiguration to mitigate the effect of biases caused by failed components as well as to obtain stability and satisfactory performance using the remaining actuators and sensors. Conditions for fault identifiability are presented, and the adaptive scheme is applied to an aircraft flight control example in the presence of actuator failures. Simulation results demonstrate that the method can rapidly and accurately detect faults and estimate the fault values, thus enabling safe operation and acceptable performance in spite of failures.

  12. A dual adaptive watermarking scheme in contourlet domain for DICOM images

    PubMed Central

    2011-01-01

    Background Nowadays, medical imaging equipment produces digital medical images. In a modern health care environment, new systems such as PACS (picture archiving and communication systems) also use the digital form of medical images. The digital form of medical images has many advantages over its analog form, such as ease of storage and transmission. Medical images in digital form must be stored in a secured environment to preserve patient privacy. It is also important to detect modifications to the image. These objectives are achieved by watermarking the medical image. Methods In this paper, we present a dual and oblivious (blind) watermarking scheme in the contourlet domain. Because of the importance of the ROI (region of interest) in interpretation by medical doctors, rather than the RONI (region of non-interest), we propose an adaptive dual watermarking scheme with different embedding strengths in the ROI and RONI. We embed watermark bits in the singular value vectors of the embedded blocks within the lowpass subband in the contourlet domain. Results The values of the PSNR (peak signal-to-noise ratio) and SSIM (structural similarity measure) index of the ROI for the proposed DICOM (digital imaging and communications in medicine) images in this paper are larger than 64 and 0.997, respectively. These values confirm that our algorithm has good transparency. Because of the different embedding strengths, the BER (bit error rate) values of the signature watermark are less than the BER values of the caption watermark. Our results show that watermarked images in the contourlet domain have greater robustness against attacks than in the wavelet domain. In addition, the qualitative analysis of our method shows it has good invisibility. Conclusions The proposed contourlet-based watermarking algorithm in this paper uses an automatic selection of the ROI and embeds the watermark in the singular values of contourlet subbands, which makes the algorithm more efficient and more robust against noise attacks than other transform domains.

  13. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding.

    PubMed

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-01-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering-CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that the value of the sparsity is known before starting each data gathering epoch; thus they ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme, where the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes-MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both datasets from ocean temperature and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme. PMID:27043574

  14. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding

    PubMed Central

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-01-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering—CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that the value of the sparsity is known before starting each data gathering epoch; thus they ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme, where the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes—MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both datasets from ocean temperature and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme. PMID:27043574

  15. A massively parallel adaptive scheme for melt migration in geodynamics computations

    NASA Astrophysics Data System (ADS)

    Dannberg, Juliane; Heister, Timo; Grove, Ryan

    2016-04-01

    Melt generation and migration are important processes for the evolution of the Earth's interior and impact the global convection of the mantle. While they have been the subject of numerous investigations, the typical time and length-scales of melt transport are vastly different from global mantle convection, which determines where melt is generated. This makes it difficult to study mantle convection and melt migration in a unified framework. In addition, modelling magma dynamics poses the challenge of highly non-linear and spatially variable material properties, in particular the viscosity. We describe our extension of the community mantle convection code ASPECT that adds equations describing the behaviour of silicate melt percolating through and interacting with a viscously deforming host rock. We use the original compressible formulation of the McKenzie equations, augmented by an equation for the conservation of energy. This approach includes both melt migration and melt generation with the accompanying latent heat effects, and it incorporates the individual compressibilities of the solid and the fluid phase. For this, we derive an accurate and stable Finite Element scheme that can be combined with adaptive mesh refinement. This is particularly advantageous for this type of problem, as the resolution can be increased in mesh cells where melt is present and viscosity gradients are high, whereas a lower resolution is sufficient in regions without melt. Together with a high-performance, massively parallel implementation, this allows for high resolution, 3d, compressible, global mantle convection simulations coupled with melt migration. Furthermore, scalable iterative linear solvers are required to solve the large linear systems arising from the discretized system. Finally, we present benchmarks and scaling tests of our solver up to tens of thousands of cores, show the effectiveness of adaptive mesh refinement when applied to melt migration and compare the

  16. Uncertainty Propagation and Quantification using Constrained Coupled Adaptive Forward-Inverse Schemes: Theory and Applications

    NASA Astrophysics Data System (ADS)

    Ryerson, F. J.; Ezzedine, S. M.; Antoun, T.

    2013-12-01

    equation for the distribution of k is solved, provided that Cauchy data are appropriately assigned. In the next stage, only a limited number of passive measurements are provided. In this case, the forward and inverse PDEs are solved simultaneously. This is accomplished by adding regularization terms and filtering the pressure gradients in the inverse problem. The forward and inverse problems are either simultaneously or sequentially coupled and solved using implicit schemes, adaptive mesh refinement, and Galerkin finite elements. The final case arises when P, k, and Q data exist only at producing wells. This exceedingly ill-posed problem calls for additional constraints on the forward-inverse coupling to ensure that the production rates are satisfied at the desired locations. Results from all three cases are presented, demonstrating the stability and accuracy of the proposed approach and, more importantly, providing some insights into the consequences of data undersampling, uncertainty propagation and quantification. We illustrate the advantages of this novel approach over common UQ forward drivers on several subsurface energy problems in porous, fractured, and/or faulted reservoirs. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  17. Counterrotating prop-fan simulations which feature a relative-motion multiblock grid decomposition enabling arbitrary time-steps

    NASA Technical Reports Server (NTRS)

    Janus, J. Mark; Whitfield, David L.

    1990-01-01

    Improvements to a computer algorithm developed for the time-accurate flow analysis of rotating machines are presented. The flow model is a finite volume method utilizing a high-resolution approximate Riemann solver for interface flux definitions. The numerical scheme is a block LU implicit iterative-refinement method which possesses apparent unconditional stability. Multiblock composite gridding is used to partition the field into a specified arrangement of blocks exhibiting varying degrees of similarity. Block-block relative motion is achieved using local grid distortion to reduce grid skewness and accommodate arbitrary time step selection. A general high-order numerical scheme is applied to satisfy the geometric conservation law. An even-blade-count counterrotating unducted fan configuration is chosen for a computational study comparing solutions resulting from altering parameters such as time step size and iteration count. The solutions are compared with measured data.

  18. Constrained Density Functional Theory by Imaginary Time-Step Method

    NASA Astrophysics Data System (ADS)

    Kidd, Daniel

    Constrained Density Functional Theory (CDFT) has been a popular choice within the last decade for sidestepping the self-interaction problem in long-range charge transfer calculations. Typically, an inner constraint loop is added within the self-consistent field iterations of DFT in order to enforce the charge transfer state by means of a Lagrange multiplier method. In this work, an alternate implementation of CDFT is introduced, the imaginary time-step method, which lends itself more readily to real-space calculations through its ability to solve numerically for 3D local external potentials that enforce arbitrary given densities. This method has been shown to reproduce the proper 1/R dependence of charge transfer systems in real-space calculations, as well as to generate useful constraint potentials. As an example application, this method is shown to be capable of describing defects within periodic systems using finite calculations by constraining the 3D density to that of the periodically calculated perfect system at the boundaries.
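The imaginary time-step idea itself is standard: repeatedly applying (1 - Δτ·H) to a trial state and renormalizing damps the higher-energy components fastest, projecting onto the ground state. A toy dense-matrix sketch of that projection (the constrained-potential machinery of CDFT is not included, and the names are illustrative):

```python
import math

def imaginary_time_ground_state(H, psi, dtau=0.1, n_steps=200):
    """Project onto the ground state of a small symmetric matrix H by
    repeated imaginary-time steps psi <- (I - dtau*H) psi followed by
    renormalization. A pure-Python stand-in for the grid-based
    propagation used in real-space DFT codes; dtau must be small
    enough that 1 - dtau*E stays positive for all eigenvalues E."""
    n = len(psi)
    for _ in range(n_steps):
        hpsi = [sum(H[i][j] * psi[j] for j in range(n)) for i in range(n)]
        psi = [psi[i] - dtau * hpsi[i] for i in range(n)]
        norm = math.sqrt(sum(c * c for c in psi))
        psi = [c / norm for c in psi]
    return psi
```

Each step multiplies the eigencomponent with eigenvalue E by (1 - dtau*E), so the smallest-E (ground-state) component decays slowest and dominates after renormalization.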

  19. Large-scale MOSFET and interconnect circuit simulation using waveform relaxation and transmission line time step control

    NASA Astrophysics Data System (ADS)

    Chen, Chun-Jung; Chang, Allen Y.; Tsai, Chang-Lung; Lee, Chih-Jen; Chou, Li-Ping; Shin, Tien-Hao

    2012-04-01

    A modified Waveform Relaxation algorithm with transmission line calculation ability is proposed to perform large-scale circuit simulation for MOSFET circuits with lossy coupled transmission lines. The adopted full time-domain transmission line calculation algorithm, based on the Method of Characteristics, has been equipped with a time step control scheme to improve the calculation efficiency. All proposed methods have been implemented in a simulation program and used to simulate several circuits. The simulation results justify the success of the proposed methods.
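The record does not spell out its time step control rule. A common generic controller of the kind used in such simulators (and in the truncation-error-based schemes this section collects) scales the step by (tol/err)^(1/(p+1)) with a safety factor and clipping. A sketch under that assumption; all names and constants are illustrative, not from the paper:

```python
def next_time_step(dt, err, tol, order=2, safety=0.9, grow=2.0, shrink=0.1):
    """Propose the next step size from an estimate of the local
    truncation error: scale dt by safety*(tol/err)^(1/(order+1)),
    clipped to [shrink*dt, grow*dt] to avoid erratic step changes.
    A zero error estimate simply allows maximum growth."""
    if err == 0:
        return grow * dt
    factor = safety * (tol / err) ** (1.0 / (order + 1))
    return dt * min(grow, max(shrink, factor))
```

In practice a step whose estimated error exceeds tol is also rejected and retried with the smaller dt, so the tolerance bounds the error actually committed, not just the next proposal.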

  20. Operational flood control of a low-lying delta system using large time step Model Predictive Control

    NASA Astrophysics Data System (ADS)

    Tian, Xin; van Overloop, Peter-Jules; Negenborn, Rudy R.; van de Giesen, Nick

    2015-01-01

    The safety of low-lying deltas is threatened not only by riverine flooding but by storm-induced coastal flooding as well. For the purpose of flood control, these deltas are mostly protected in a man-made environment, where dikes, dams and other adjustable infrastructures, such as gates, barriers and pumps, are widely constructed. Instead of always reinforcing and heightening these structures, it is worth making the most of the existing infrastructure to reduce damage and manage the delta in an operational, integrated way. In this study, an advanced real-time control approach, Model Predictive Control (MPC), is proposed to operate these structures in the Dutch delta system (the Rhine-Meuse delta). The application accounts for non-linearity in the dynamic behavior of the water system and the structures. To deal with the non-linearity, a linearization scheme is applied which directly uses the gate height, instead of the structure flow, as the control variable. Given that MPC needs to compute control actions in real time, we also address computational time. A new large time step scheme is proposed in order to save computation time, in which different control variables can have different control time steps. Simulation experiments demonstrate that MPC with the large time step setting is able to control a delta system better and much more efficiently than conventional operational schemes.

  1. Multiple ``time step'' Monte Carlo simulations: Application to charged systems with Ewald summation

    NASA Astrophysics Data System (ADS)

    Bernacki, Katarzyna; Hetényi, Balázs; Berne, B. J.

    2004-07-01

    Recently, we have proposed an efficient scheme for Monte Carlo simulations, the multiple "time step" Monte Carlo (MTS-MC) [J. Chem. Phys. 117, 8203 (2002)] based on the separation of the potential interactions into two additive parts. In this paper, the structural and thermodynamic properties of the simple point charge water model combined with the Ewald sum are compared for the MTS-MC real-/reciprocal-space split of the Ewald summation and the common Metropolis Monte Carlo method. We report a number of observables as a function of CPU time calculated using MC and MTS-MC. The correlation functions indicate that speedups on the order of 4.5-7.5 can be obtained for systems of 108-500 waters for n=10 splitting parameter.
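The real-/reciprocal-space split described above can be sketched as a two-level Metropolis scheme: inner moves are accepted on the cheap short-range part of the potential alone, and the composite move is then accepted on the expensive remainder, which restores detailed balance for the full potential. The interface below is an illustrative simplification, not the paper's code:

```python
import math
import random

def _accept(dU, beta, rng):
    """Metropolis criterion, overflow-safe for downhill moves."""
    return dU <= 0 or rng() < math.exp(-beta * dU)

def mts_mc_move(x, propose, u_cheap, u_full, n_inner, beta,
                rng=random.random):
    """Two-level Metropolis move in the spirit of MTS-MC: n_inner inner
    moves are accepted on the cheap potential alone (e.g. the real-space
    part of the Ewald sum); the resulting composite move is then
    accepted or rejected on the expensive remainder u_full - u_cheap
    (e.g. the reciprocal-space part), evaluated only once."""
    y = x
    for _ in range(n_inner):
        z = propose(y)
        if _accept(u_cheap(z) - u_cheap(y), beta, rng):
            y = z
    d_rest = (u_full(y) - u_cheap(y)) - (u_full(x) - u_cheap(x))
    return y if _accept(d_rest, beta, rng) else x
```

The speedup comes from evaluating the expensive remainder once per n_inner cheap moves; the reported factors of 4.5-7.5 correspond to such a split for SPC water with Ewald summation.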

  2. Daily Time Step Refinement of Optimized Flood Control Rule Curves for a Global Warming Scenario

    NASA Astrophysics Data System (ADS)

    Lee, S.; Fitzgerald, C.; Hamlet, A. F.; Burges, S. J.

    2009-12-01

    Pacific Northwest temperatures have warmed by 0.8 °C since 1920 and are predicted to further increase in the 21st century. Simulated streamflow timing shifts associated with climate change have been found in past research to degrade water resources system performance in the Columbia River Basin when using existing system operating policies. To adapt to these hydrologic changes, optimized flood control operating rule curves were developed in a previous study using a hybrid optimization-simulation approach which rebalanced flood control and reservoir refill at a monthly time step. For the climate change scenario, use of the optimized flood control curves restored reservoir refill capability without increasing flood risk. Here we extend the earlier studies using a detailed daily time step simulation model applied over a somewhat smaller portion of the domain (encompassing Libby, Duncan, and Corra Linn dams, and Kootenai Lake) to evaluate and refine the optimized flood control curves derived from monthly time step analysis. Moving from a monthly to daily analysis, we found that the timing of flood control evacuation needed adjustment to avoid unintended outcomes affecting Kootenai Lake. We refined the flood rule curves derived from monthly analysis by creating a more gradual evacuation schedule, but kept the timing and magnitude of maximum evacuation the same as in the monthly analysis. After these refinements, the performance at monthly time scales reported in our previous study proved robust at daily time scales. Due to a decrease in July storage deficits, additional benefits such as more revenue from hydropower generation and more July and August outflow for fish augmentation were observed when the optimized flood control curves were used for the climate change scenario.

  3. A mesh adaptivity scheme on the Landau-de Gennes functional minimization case in 3D, and its driving efficiency

    NASA Astrophysics Data System (ADS)

    Bajc, Iztok; Hecht, Frédéric; Žumer, Slobodan

    2016-09-01

    This paper presents a 3D mesh adaptivity strategy for unstructured tetrahedral meshes, driven by a posteriori error estimates based on metrics derived from the Hessian of a solution. The study is made on the case of a nonlinear finite element minimization scheme for the Landau-de Gennes free energy functional of nematic liquid crystals. Newton's iteration for tensor fields is employed, with the steepest descent method stepping in when needed. Aspects relating to the driving of mesh adaptivity within the nonlinear scheme are considered. The algorithmic performance is found to depend on at least two factors: when to trigger each single mesh adaptation, and the precision of the correlated remeshing. Each factor is represented by a parameter, whose value may vary for every new mesh adaptation. We show empirically that the overall time to convergence can vary considerably when different sequences of parameters are used, thus posing a question about optimality. The extensive testing and debugging done within this work on the simulation of systems of nematic colloids contributed substantially to upgrading the 3D meshing capabilities of an open-source finite-element-oriented programming language, as well as of an outer 3D remeshing module.

  4. Design and implementation of adaptive PI control schemes for web tension control in roll-to-roll (R2R) manufacturing.

    PubMed

    Raul, Pramod R; Pagilla, Prabhakar R

    2015-05-01

    In this paper, two adaptive Proportional-Integral (PI) control schemes are designed and discussed for control of web tension in Roll-to-Roll (R2R) manufacturing systems. R2R systems are used to transport continuous materials (called webs) on rollers from the unwind roll to the rewind roll. Maintaining web tension at the desired value is critical to many R2R processes such as printing, coating, lamination, etc. Existing fixed-gain PI tension control schemes currently used in industrial practice require extensive tuning and do not provide the desired performance for changing operating conditions and material properties. The first adaptive PI scheme utilizes the model reference approach, where the controller gains are estimated by matching the actual closed-loop tension control system with an appropriately chosen reference model. The second adaptive PI scheme utilizes the indirect adaptive control approach together with the relay feedback technique to automatically initialize the adaptive PI gains. These adaptive tension control schemes can be implemented on any R2R manufacturing system. The key features of the two adaptive schemes are that they are simple for practicing engineers to design, easy to implement in real time, and automate the tuning process. Extensive experiments are conducted on a large experimental R2R machine which mimics many features of an industrial R2R machine. These experiments include trials with two different polymer webs and a variety of operating conditions. Implementation guidelines are provided for both adaptive schemes. Experimental results comparing the two adaptive schemes and a fixed-gain PI tension control scheme used in industrial practice are provided and discussed. PMID:25555757
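
    The relay-feedback initialization mentioned in the abstract can be sketched as follows: a relay drives the plant into a limit cycle whose period Pu and amplitude a yield the ultimate gain Ku = 4d/(pi*a), from which PI gains follow Ziegler-Nichols-style rules. The first-order-plus-delay plant and the tuning rules below are illustrative assumptions, not the paper's web-tension model or exact design.

```python
import math
from collections import deque

def relay_pi_tuning(gain=2.0, tau=5.0, delay=1.0, d=1.0, dt=0.005,
                    t_end=60.0):
    """Relay-feedback auto-tuning sketch on an assumed first-order-plus-delay
    plant: measure the limit-cycle period Pu and amplitude a, estimate the
    ultimate gain Ku = 4d/(pi*a), and derive PI gains from it."""
    nbuf = int(round(delay / dt))
    ubuf = deque([0.0] * nbuf, maxlen=nbuf)  # transport-delay buffer
    y, t = 0.0, 0.0
    crossings, amp = [], 0.0
    while t < t_end:
        u = d if y <= 0.0 else -d              # ideal relay, setpoint 0
        y_new = y + dt * (gain * ubuf[0] - y) / tau  # Euler step of plant
        ubuf.append(u)                         # deque drops the oldest input
        if t > t_end / 2.0:                    # measure the settled cycle
            if y <= 0.0 < y_new:               # upward zero crossing
                crossings.append(t)
            amp = max(amp, abs(y_new))
        y, t = y_new, t + dt
    pu = (crossings[-1] - crossings[0]) / (len(crossings) - 1)
    ku = 4.0 * d / (math.pi * amp)             # describing-function estimate
    return pu, ku, 0.45 * ku, pu / 1.2         # Pu, Ku, Kp, Ti

pu, ku, kp, ti = relay_pi_tuning()
```

    The attraction of the relay test, as the abstract notes, is that it needs no prior model: one experiment on the closed loop is enough to seed the adaptive PI gains.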

  5. Adaptations in a Community-Based Family Intervention: Replication of Two Coding Schemes.

    PubMed

    Cooper, Brittany Rhoades; Shrestha, Gitanjali; Hyman, Leah; Hill, Laura

    2016-02-01

    Although program adaptation is a reality in community-based implementations of evidence-based programs, much of the discussion about adaptation remains theoretical. The primary aim of this study was to replicate two coding systems to examine adaptations in large-scale, community-based disseminations of the Strengthening Families Program for Parents and Youth 10-14, a family-based substance use prevention program. Our second aim was to explore intersections between various dimensions of facilitator-reported adaptations from these two coding systems. Our results indicate that only a few types of adaptations and a few reasons accounted for the majority (over 70%) of all reported adaptations. We also found that most adaptations were logistical, reactive, and not aligned with the program's goals. In many ways, our findings replicate those of the original studies, suggesting the two coding systems are robust even when applied to self-reported data collected from community-based implementations. Our findings on the associations between adaptation dimensions can inform future studies assessing the relationship between adaptations and program outcomes. Studies of local adaptations, like the present one, should help researchers, program developers, and policymakers better understand the issues faced by implementers and guide efforts related to program development, transferability, and sustainability. PMID:26661413

  6. An on-line equivalent system identification scheme for adaptive control. Ph.D. Thesis - Stanford Univ.

    NASA Technical Reports Server (NTRS)

    Sliwa, S. M.

    1984-01-01

    A prime obstacle to the widespread use of adaptive control is the degradation of performance, and possible instability, resulting from the presence of unmodeled dynamics. The approach taken is to explicitly include the unstructured model uncertainty in the output error identification algorithm. The order of the compensator is successively increased by including identified modes. During this model building stage, heuristic rules are used to test for convergence prior to designing compensators. Additionally, the recursive identification algorithm was extended to multi-input, multi-output systems. Enhancements were also made to reduce the computational burden of an algorithm for obtaining minimal state space realizations from the inexact, multivariable transfer functions which result from the identification process. A number of potential adaptive control applications for this approach are illustrated using computer simulations. Results indicated that when speed of adaptation and plant stability are not critical, the proposed schemes converge and enhance system performance.

  7. A simple method for improving the time-stepping accuracy in atmosphere and ocean models

    NASA Astrophysics Data System (ADS)

    Williams, P. D.

    2012-12-01

    In contemporary numerical simulations of the atmosphere and ocean, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. A common time-stepping method in atmosphere and ocean models is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter, which has become known as the RAW filter (Williams 2009, 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other atmosphere and ocean models. References PD Williams (2009) A
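
    The difference between the RA and RAW filters is small enough to show in a few lines. The sketch below integrates the oscillation equation dx/dt = i*omega*x with filtered leapfrog; setting alpha = 1 recovers the classical RA filter, while alpha ~ 0.53 gives the RAW filter. It is a toy illustration with assumed parameter values, not code from any of the models listed.

```python
import cmath

def leapfrog_filtered(alpha, nu=0.2, omega=1.0, dt=0.2, nsteps=200):
    """Leapfrog for dx/dt = i*omega*x with a Robert-Asselin-type filter.
    alpha = 1.0: classical RA filter; alpha ~ 0.53: RAW filter, which
    shares the filter displacement between time levels n and n+1."""
    x_prev = 1.0 + 0.0j                  # exact solution at t = 0
    x_curr = cmath.exp(1j * omega * dt)  # exact solution at t = dt
    for _ in range(nsteps):
        x_next = x_prev + 2.0 * dt * (1j * omega * x_curr)
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)
        x_curr += alpha * d              # displacement applied at level n
        x_next += (alpha - 1.0) * d      # RAW also corrects level n+1
        x_prev, x_curr = x_curr, x_next
    return abs(x_curr)                   # the exact amplitude stays 1

amp_ra = leapfrog_filtered(alpha=1.0)    # RA: spurious damping accumulates
amp_raw = leapfrog_filtered(alpha=0.53)  # RAW: amplitude nearly conserved
```

    Running both cases shows the RA amplitude decaying well below 1 after a few hundred steps while the RAW amplitude stays close to 1, consistent with the "single line of code" change described above.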

  8. A robust data fusion scheme for integrated navigation systems employing fault detection methodology augmented with fuzzy adaptive filtering

    NASA Astrophysics Data System (ADS)

    Ushaq, Muhammad; Fang, Jiancheng

    2013-10-01

    Integrated navigation systems for various applications generally employ the centralized Kalman filter (CKF), wherein all measured sensor data are communicated to a single central Kalman filter. The advantage of the CKF is minimal loss of information and high precision under benign conditions, but it may suffer from computational overloading and poor fault tolerance. The alternative is the federated Kalman filter (FKF), wherein local estimates can deliver an optimal or suboptimal state estimate according to a chosen information fusion criterion. The FKF has enhanced throughput and multiple-level fault detection capability. The standard CKF and FKF require that the system noise and the measurement noise are zero-mean and Gaussian, and it is further assumed that the covariances of the system and measurement noises remain constant. If the theoretical and actual statistical features employed in the Kalman filter are not compatible, the filter does not render satisfactory solutions and divergence problems can occur. To resolve such problems, in this paper, an adaptive Kalman filter scheme strengthened with a fuzzy inference system (FIS) is employed to adapt the statistical features of contributing sensors, online, in the light of real system dynamics and varying measurement noises. Excessive faults are detected and isolated by employing the chi-square test method. As a case study, the presented scheme has been implemented on a Strapdown Inertial Navigation System (SINS) integrated with the Celestial Navigation System (CNS), GPS and Doppler radar using the FKF. Collectively the overall system can be termed a SINS/CNS/GPS/Doppler integrated navigation system. The simulation results have validated the effectiveness of the presented scheme with significantly enhanced precision, reliability and fault tolerance. Effectiveness of the scheme has been tested against simulated abnormal errors/noises during different time segments of flight. It is believed that the presented scheme can be
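
    The chi-square fault detection step can be illustrated on a scalar Kalman filter: each innovation is normalized by its covariance and compared with a chi-square threshold, and flagged measurements are isolated from the update. The plant, noise levels, fault size and isolation policy below are illustrative assumptions, not the SINS/CNS/GPS/Doppler configuration of the paper.

```python
import random

CHI2_95_DF1 = 3.841  # 95% chi-square threshold, 1 degree of freedom

def run_kf(n=100, fault_at=60, bias=5.0, q=1e-4, r=0.25, seed=7):
    """Scalar Kalman filter tracking a constant state, with an innovation
    chi-square test that flags and isolates faulty measurements."""
    rng = random.Random(seed)
    x_est, p = 0.0, 1.0
    flags = []
    for k in range(n):
        z = rng.gauss(0.0, r ** 0.5)       # measurement of true state 0
        if k >= fault_at:
            z += bias                      # injected sensor fault
        p += q                             # prediction (constant state)
        s = p + r                          # innovation covariance
        nu = z - x_est                     # innovation
        faulty = nu * nu / s > CHI2_95_DF1
        flags.append(faulty)
        if not faulty:                     # isolate flagged measurements
            gain = p / s
            x_est += gain * nu
            p *= 1.0 - gain
    return flags

flags = run_kf()
```

    Because flagged measurements are excluded from the update, the estimate stays unbiased through the fault, and the persistent bias keeps the normalized innovation far above the threshold for the remainder of the run.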

  9. Fine-Granularity Loading Schemes Using Adaptive Reed-Solomon Coding for xDSL-DMT Systems

    NASA Astrophysics Data System (ADS)

    Panigrahi, Saswat; Le-Ngoc, Tho

    2006-12-01

    While most existing loading algorithms for xDSL-DMT systems strive for the optimal energy distribution to maximize the data rate, the number of bits loaded onto each subcarrier is constrained to be an integer, and the associated granularity losses can represent a significant percentage of the achievable data rate, especially in the presence of the peak-power constraint. To recover these losses, we propose a fine-granularity loading scheme using joint optimization of adaptive modulation and flexible coding parameters based on programmable Reed-Solomon (RS) codes and a bit-error probability criterion. Illustrative examples of applications to VDSL-DMT systems indicate that the proposed scheme can offer a rate increase of about [InlineEquation not available: see fulltext.] in most cases as compared to various existing integer-bit-loading algorithms. This improvement is in good agreement with the theoretical estimates developed to quantify the granularity loss.

  10. Time-stepping stability of continuous and discontinuous finite-element methods for 3-D wave propagation

    NASA Astrophysics Data System (ADS)

    Mulder, W. A.; Zhebel, E.; Minisini, S.

    2014-02-01

    We analyse the time-stepping stability for the 3-D acoustic wave equation, discretized on tetrahedral meshes. Two types of methods are considered: mass-lumped continuous finite elements and the symmetric interior-penalty discontinuous Galerkin method. Combining the spatial discretization with the leap-frog time-stepping scheme, which is second-order accurate and conditionally stable, leads to a fully explicit scheme. We provide estimates of its stability limit for simple cases, namely, the reference element with Neumann boundary conditions, its distorted version of arbitrary shape, the unit cube that can be partitioned into six tetrahedra with periodic boundary conditions and its distortions. The Courant-Friedrichs-Lewy stability limit contains an element diameter for which we considered different options. The one based on the sum of the eigenvalues of the spatial operator for the first-degree mass-lumped element gives the best results. It resembles the diameter of the inscribed sphere but is slightly easier to compute. The stability estimates show that the mass-lumped continuous and the discontinuous Galerkin finite elements of degree 2 have comparable stability conditions, whereas the mass-lumped elements of degree one and three allow for larger time steps.
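
    The kind of eigenvalue-based stability bound discussed above can be illustrated in 1-D: the leapfrog limit is dt <= 2/sqrt(lam_max), where lam_max is the largest eigenvalue of the discrete spatial operator. The sketch below uses a simple second-difference operator and power iteration, an assumed generic setting rather than the paper's mass-lumped tetrahedral elements.

```python
import math
import random

def leapfrog_dt_limit(n=50, c=1.0, length=1.0, iters=1000, seed=0):
    """Estimate the largest eigenvalue of the 1-D second-difference wave
    operator (c^2/dx^2)*tridiag(-1, 2, -1) by power iteration, then return
    the leapfrog stability limit dt <= 2/sqrt(lam_max)."""
    dx = length / (n + 1)
    def apply_op(v):
        out = []
        for i in range(n):
            left = v[i - 1] if i > 0 else 0.0
            right = v[i + 1] if i < n - 1 else 0.0
            out.append((c * c / (dx * dx)) * (2.0 * v[i] - left - right))
        return out
    rng = random.Random(seed)
    v = [rng.random() + 0.1 for _ in range(n)]
    for _ in range(iters):                 # power iteration
        w = apply_op(v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    lam = sum(a * b for a, b in zip(v, apply_op(v)))  # Rayleigh quotient
    return lam, 2.0 / math.sqrt(lam)

lam, dt_limit = leapfrog_dt_limit()
```

    For this operator lam_max approaches 4c^2/dx^2, so the recovered limit is essentially the familiar CFL bound dt <= dx/c; the paper's contribution is choosing an element diameter that makes the analogous bound sharp on tetrahedra.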

  11. New synchronization criteria for memristor-based networks: adaptive control and feedback control schemes.

    PubMed

    Li, Ning; Cao, Jinde

    2015-01-01

    In this paper, we investigate synchronization for memristor-based neural networks with time-varying delay via adaptive and feedback controllers. Under the framework of Filippov solutions and differential inclusion theory, and by using the adaptive control technique and constructing a novel Lyapunov functional, an adaptive update law is designed and two synchronization criteria are derived for memristor-based neural networks with time-varying delay. By removing some assumptions common in the literature, the derived synchronization criteria are more general than existing ones. Finally, two simulation examples are provided to illustrate the effectiveness of the theoretical results. PMID:25299765

  12. Adaptive control schemes for improving dynamic performance of efficiency-optimized induction motor drives.

    PubMed

    Kumar, Navneet; Raj Chelliah, Thanga; Srivastava, S P

    2015-07-01

    Model Based Control (MBC) is one of the energy-optimal controllers used in vector-controlled Induction Motor (IM) drives for controlling the excitation of the motor in accordance with torque and speed. MBC offers energy conservation, especially at part-load operation, but it creates ripples in torque and speed during load transitions, leading to poor dynamic performance of the drive. This study investigates the opportunity for improving the dynamic performance of a three-phase IM operating with MBC and proposes three control schemes: (i) MBC with a low-pass filter; (ii) torque-producing current (iqs) injection into the output of the speed controller; (iii) a Variable Structure Speed Controller (VSSC). The operation of MBC before and after load transitions is also analyzed. The dynamic performance of a 1-hp, three-phase squirrel-cage IM with a mine-hoist load diagram is tested. Test results are provided for the conventional field-oriented (constant flux) control and MBC (adjustable excitation) with the proposed schemes. The effectiveness of the proposed schemes is also illustrated for parametric variations. The test results and subsequent analysis confirm that the motor dynamics improve significantly with all three proposed schemes in terms of peak overshoot/undershoot amplitude of torque and DC link power, in addition to energy saving during load transitions. PMID:25820090

  13. Variable grid-size and time-step finite difference method for seismic forward modeling and reverse-time migration

    NASA Astrophysics Data System (ADS)

    Wang, Yue

    get a good match between numerical results and observed field data. For ocean-bottom or land survey data associated with a low shear-velocity unconsolidated layer near the geophone locations, the variable grid FD method can be used to extrapolate wavefields using a fine grid for the shallow part and a coarse grid for the deep part. It is found that a staggered-grid reverse-time migration scheme can image both primary and multiple energy to their correct reflection positions by using both pressure and particle-velocity data. This is a new result in that migration can now be used to simultaneously image both primary and receiver-side pegleg reflections. The new variable time-step method can be used for the staggered-grid FD scheme and provides optimal computational savings. The combination of variable grid-size and time-step methods speeds up the reverse-time migration by more than ten times for the multicomponent data set in this thesis, compared to a standard reverse-time migration method.

  14. A stabilized Runge–Kutta–Legendre method for explicit super-time-stepping of parabolic and mixed equations

    SciTech Connect

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-15

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very
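
    The three-term Legendre recursion behind RKL1 is compact enough to sketch. The stage coefficients below follow the first-order scheme described in the abstract; the 1-D heat equation test problem and its parameters are assumed for illustration.

```python
import math

def rkl1_step(u, dt, s, rhs):
    """One RKL1 super-time-step: s Runge-Kutta-Legendre stages advance a
    parabolic operator by dt, which may be up to s*(s+1)/2 times the
    explicit stability limit."""
    b = 2.0 / (s * s + s)
    y_prev = u[:]                                       # Y_0
    f = rhs(y_prev)
    y = [a + b * dt * g for a, g in zip(y_prev, f)]     # Y_1
    for j in range(2, s + 1):
        mu, nu = (2.0 * j - 1.0) / j, (1.0 - j) / j     # Legendre recursion
        f = rhs(y)
        y_next = [mu * a + nu * p + mu * b * dt * g
                  for a, p, g in zip(y, y_prev, f)]     # Y_j
        y_prev, y = y, y_next
    return y

# assumed test problem: 1-D heat equation u_t = u_xx, homogeneous Dirichlet
n, diff = 49, 1.0
dx = 1.0 / (n + 1)
def rhs(u):
    return [diff * ((u[i - 1] if i > 0 else 0.0) - 2.0 * u[i]
                    + (u[i + 1] if i < n - 1 else 0.0)) / (dx * dx)
            for i in range(n)]

s = 5
dt_expl = 0.5 * dx * dx / diff            # explicit (forward Euler) limit
dt_super = 0.5 * s * (s + 1) * dt_expl    # 15x larger, still stable
u = [math.sin(math.pi * (i + 1) * dx) for i in range(n)]
t = 0.0
for _ in range(10):
    u = rkl1_step(u, dt_super, s, rhs)
    t += dt_super
```

    With s = 5 each superstep covers fifteen explicit steps at the cost of five right-hand-side evaluations, and the decaying sine mode stays close to the analytic solution exp(-pi^2 t) sin(pi x).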

  15. Persons with Multiple Disabilities Exercise Adaptive Response Schemes with the Help of Technology-Based Programs: Three Single-Case Studies

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Oliva, Doretta; Campodonico, Francesca; Lang, Russell

    2012-01-01

    The present three single-case studies assessed the effectiveness of technology-based programs to help three persons with multiple disabilities exercise adaptive response schemes independently. The response schemes included (a) left and right head movements for a man who kept his head increasingly static on his wheelchair's headrest (Study I), (b)…

  16. Adaptive scheme for maintaining the performance of the in-home white-LED visible light wireless communications using OFDM

    NASA Astrophysics Data System (ADS)

    Chow, C. W.; Yeh, C. H.; Liu, Y. F.; Huang, P. Y.; Liu, Y.

    2013-04-01

    Spectrally efficient orthogonal frequency division multiplexing (OFDM) is a promising modulation format for light-emitting-diode (LED) optical wireless (OW) visible light communication (VLC). VLC is a directional, line-of-sight communication; hence any offset between the optical receiver (Rx) and the LED light source results in a large drop in received optical power. In order to keep the luminance of the LED light source unchanged, we propose and demonstrate adaptive control of the OFDM modulation order to maintain the VLC transmission performance. Experimental results confirm the feasibility of the proposed scheme.

  17. An adaptive critic-based scheme for consensus control of nonlinear multi-agent systems

    NASA Astrophysics Data System (ADS)

    Heydari, Ali; Balakrishnan, S. N.

    2014-12-01

    The problem of decentralised consensus control of a network of heterogeneous nonlinear systems is formulated as an optimal tracking problem and a solution is proposed using an approximate dynamic programming based neurocontroller. The neurocontroller training comprises an initial offline training phase and an online re-optimisation phase to account for the fact that the reference signal subject to tracking is not fully known and available ahead of time, i.e., during the offline training phase. As long as the dynamics of the agents are controllable, and the communication graph has a directed spanning tree, this scheme guarantees the synchronisation/consensus even under switching communication topology and directed communication graph. Finally, an aerospace application is selected for the evaluation of the performance of the method. Simulation results demonstrate the potential of the scheme.

  18. Improvement of the multilayer perceptron for air quality modelling through an adaptive learning scheme

    NASA Astrophysics Data System (ADS)

    Hoi, K. I.; Yuen, K. V.; Mok, K. M.

    2013-09-01

    The multilayer perceptron (MLP), normally trained by the offline backpropagation algorithm, cannot adapt to a changing air quality system and consequently underperforms. To improve this, the extended Kalman filter is adopted into the learning algorithm to build a time-varying multilayer perceptron (TVMLP) in this study. Application of the TVMLP to model the daily averaged concentration of respirable suspended particulates with aerodynamic diameter of not more than 10 µm (PM10) in Macau shows statistically significant improvement in the performance indicators over the MLP counterpart. In addition, the adaptive learning algorithm explicitly addresses the uncertainty of the prediction, so that confidence intervals can be provided. More importantly, the adaptiveness of the TVMLP improves prediction in the range of higher particulate concentrations, which concerns the public most.

  19. A novel data adaptive detection scheme for distributed fiber optic acoustic sensing

    NASA Astrophysics Data System (ADS)

    Ölçer, Íbrahim; Öncü, Ahmet

    2016-05-01

    We introduce a new approach for distributed fiber optic sensing based on adaptive processing of phase-sensitive optical time domain reflectometry (Φ-OTDR) signals. Instead of conventional methods, which utilize frame averaging of detected signal traces, our adaptive algorithm estimates a set of noise parameters to enhance the signal-to-noise ratio (SNR) for improved detection performance. This data set, called the secondary data set, is used to compute a weight vector for detection; signal presence is then sought in the primary data set. This adaptive technique can be used for vibration detection in health monitoring of various civil structures, as well as for other dynamic monitoring requirements such as pipeline and perimeter security applications.

  20. An unconditionally energy stable finite difference scheme for a stochastic Cahn-Hilliard equation

    NASA Astrophysics Data System (ADS)

    Li, Xiao; Qiao, ZhongHua; Zhang, Hui

    2016-09-01

    In this work, the MMC-TDGL equation, a stochastic Cahn-Hilliard equation, is solved numerically using the finite difference method in combination with a convex splitting technique for the energy functional. For the non-stochastic case, we develop an unconditionally energy stable difference scheme which is proved to be uniquely solvable. For the stochastic case, by adopting the same splitting of the energy functional, we construct a similar and uniquely solvable difference scheme with the discretized stochastic term. The resulting schemes are nonlinear and solved by Newton iteration. For long-time simulation, an adaptive time stepping strategy is developed based on both the first- and second-order derivatives of the energy. Numerical experiments are carried out to verify the energy stability, the efficiency of the adaptive time stepping and the effect of the stochastic term.
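
    The flavor of an energy-driven adaptive step can be shown with a minimal sketch. The specific formula below, dt = max(dt_min, dt_max / sqrt(1 + alpha*E'^2)), is an assumed common choice for gradient flows, not the paper's exact strategy (which also involves the second derivative of the energy).

```python
import math

def adaptive_dt(dE_dt, dt_min=1e-4, dt_max=1e-1, alpha=1e3):
    """Adaptive step selector driven by the rate of change of the energy:
    small steps while the energy falls quickly, large steps near steady
    state."""
    return max(dt_min, dt_max / math.sqrt(1.0 + alpha * dE_dt * dE_dt))

# synthetic energy history E(t) = exp(-10 t): steep early, nearly flat later
steps, t = [], 0.0
while t < 1.0:
    dE_dt = -10.0 * math.exp(-10.0 * t)   # E'(t) along the decay
    dt = adaptive_dt(dE_dt)
    steps.append((t, dt))
    t += dt
```

    Early in the decay the selector stays near dt_min to resolve the fast coarsening, then relaxes toward dt_max once the energy flattens, which is what makes long-time simulations affordable.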

  1. AZEuS: AN ADAPTIVE ZONE EULERIAN SCHEME FOR COMPUTATIONAL MAGNETOHYDRODYNAMICS

    SciTech Connect

    Ramsey, Jon P.; Clarke, David A.; Men'shchikov, Alexander B.

    2012-03-01

    A new adaptive mesh refinement (AMR) version of the ZEUS-3D astrophysical magnetohydrodynamical fluid code, AZEuS, is described. The AMR module in AZEuS has been completely adapted to the staggered mesh that characterizes the ZEUS family of codes on which scalar quantities are zone-centered and vector components are face-centered. In addition, for applications using static grids, it is necessary to use higher-order interpolations for prolongation to minimize the errors caused by waves crossing from a grid of one resolution to another. Finally, solutions to test problems in one, two, and three dimensions in both Cartesian and spherical coordinates are presented.

  2. Parallel adaptive mesh refinement method based on WENO finite difference scheme for the simulation of multi-dimensional detonation

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang

    2015-10-01

    For numerical simulation of detonation, the computational cost of uniform meshes is large due to the vast separation of both time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper aims to propose an AMR method with high order accuracy for numerical investigation of multi-dimensional detonation. A well-designed AMR method based on the finite difference weighted essentially non-oscillatory (WENO) scheme, named AMR&WENO, is proposed. A new cell-based data structure is used to organize the adaptive meshes. The new data structure makes it possible for cells to communicate with each other quickly and easily. In order to develop an AMR method with high order accuracy, high order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload-balancing parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that the AMR&WENO method is accurate and has high resolution. Moreover, we evaluate and compare the performance of the uniform mesh WENO scheme and the parallel AMR&WENO method. The comparison results provide further insight into the high performance of the parallel AMR&WENO method.

  3. Design and Experimental Evaluation of a Robust Position Controller for an Electrohydrostatic Actuator Using Adaptive Antiwindup Sliding Mode Scheme

    PubMed Central

    Lee, Ji Min; Park, Sung Hwan; Kim, Jong Shik

    2013-01-01

    A robust control scheme is proposed for the position control of the electrohydrostatic actuator (EHA) when considering hardware saturation, load disturbance, and lumped system uncertainties and nonlinearities. To reduce overshoot due to a saturation of electric motor and to realize robustness against load disturbance and lumped system uncertainties such as varying parameters and modeling error, this paper proposes an adaptive antiwindup PID sliding mode scheme as a robust position controller for the EHA system. An optimal PID controller and an optimal anti-windup PID controller are also designed to compare control performance. An EHA prototype is developed, carrying out system modeling and parameter identification in designing the position controller. The simply identified linear model serves as the basis for the design of the position controllers, while the robustness of the control systems is compared by experiments. The adaptive anti-windup PID sliding mode controller has been found to have the desired performance and become robust against hardware saturation, load disturbance, and lumped system uncertainties and nonlinearities. PMID:23983640

  4. A video coding scheme based on joint spatiotemporal and adaptive prediction.

    PubMed

    Jiang, Wenfei; Latecki, Longin Jan; Liu, Wenyu; Liang, Hui; Gorman, Ken

    2009-05-01

    We propose a video coding scheme that departs from traditional Motion Estimation/DCT frameworks and instead uses Karhunen-Loeve Transform (KLT)/Joint Spatiotemporal Prediction framework. In particular, a novel approach that performs joint spatial and temporal prediction simultaneously is introduced. It bypasses the complex H.26x interframe techniques and it is less computationally intensive. Because of the advantage of the effective joint prediction and the image-dependent color space transformation (KLT), the proposed approach is demonstrated experimentally to consistently lead to improved video quality, and in many cases to better compression rates and improved computational speed. PMID:19342337

  5. An Adaptive Scheme for Robot Localization and Mapping with Dynamically Configurable Inter-Beacon Range Measurements

    PubMed Central

    Torres-González, Arturo; Martinez-de Dios, Jose Ramiro; Ollero, Anibal

    2014-01-01

    This work is motivated by robot-sensor network cooperation techniques where sensor nodes (beacons) are used as landmarks for range-only (RO) simultaneous localization and mapping (SLAM). This paper presents a RO-SLAM scheme that actuates over the measurement gathering process using mechanisms that dynamically modify the rate and variety of measurements that are integrated in the SLAM filter. It includes a measurement gathering module that can be configured to collect direct robot-beacon and inter-beacon measurements with different inter-beacon depth levels and at different rates. It also includes a supervision module that monitors the SLAM performance and dynamically selects the measurement gathering configuration balancing SLAM accuracy and resource consumption. The proposed scheme has been applied to an extended Kalman filter SLAM with auxiliary particle filters for beacon initialization (PF-EKF SLAM) and validated with experiments performed in the CONET Integrated Testbed. It achieved lower map and robot errors (34% and 14%, respectively) than traditional methods with a lower computational burden (16%) and similar beacon energy consumption. PMID:24776938

  7. Implementation of a mesh adaptive scheme based on an element-level error indicator

    NASA Technical Reports Server (NTRS)

    Keating, Scott; Felippa, Carlos A.; Militello, Carmelo

    1993-01-01

    We investigate the formulation and application of element-level error indicators based on parametrized variational principles. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited to drive adaptive mesh refinement on parallel computers, where access to neighboring elements resident on different processors may incur significant computational overhead. Furthermore, such indicators are not affected by physical jumps at junctures or interfaces. An element-level indicator has been derived from the higher-order element energy and applied to r- and h-adaptation of meshes for plate and shell structures. We report on our initial experiments with a cylindrical shell that intersects with flat plates, forming a simplified 'wing-body intersection' benchmark problem.

  8. Unstaggered Central Schemes for Hyperbolic Systems

    NASA Astrophysics Data System (ADS)

    Touma, R.

    2009-09-01

    We develop an unstaggered central scheme for approximating the solution of general two-dimensional hyperbolic systems. In particular, we are interested in solving applied problems arising in hydrodynamics and astrophysics. In contrast with standard central schemes that evolve the numerical solution on two staggered grids at consecutive time steps, the method we propose evolves the numerical solution on a single grid, and avoids the resolution of the Riemann problems arising at the cell interfaces, thanks to a layer of implicitly used ghost cells. The numerical base scheme is used to solve shallow water equation problems and ideal magnetohydrodynamic problems. To satisfy the divergence-free constraint of the magnetic field in the numerical solution of ideal magnetohydrodynamic problems, we adapt Evans and Hawley's constrained transport method to our unstaggered base scheme, and apply it to correct the magnetic field components at the end of each time step. The obtained results are in good agreement with corresponding ones appearing in the recent literature, thus confirming the efficiency and the potential of the proposed method.

  9. Time-step limits for a Monte Carlo Compton-scattering method

    SciTech Connect

    Densmore, Jeffery D; Warsa, James S; Lowrie, Robert B

    2009-01-01

    We perform a stability analysis of a Monte Carlo method for simulating the Compton scattering of photons by free electrons in high-energy-density applications and develop time-step limits that avoid unstable and oscillatory solutions. Implementing this Monte Carlo technique in multiphysics problems typically requires evaluating the material temperature at its beginning-of-time-step value, which can lead to this undesirable behavior. With a set of numerical examples, we demonstrate the efficacy of our time-step limits.

  10. Classification of ring artifacts for their effective removal using type adaptive correction schemes.

    PubMed

    Anas, Emran Mohammad Abu; Lee, Soo Yeol; Hasan, Kamrul

    2011-06-01

    High resolution tomographic images acquired with a digital X-ray detector are often degraded by so-called ring artifacts. In this paper, a detailed analysis including the classification, detection and correction of these ring artifacts is presented. At first, a novel idea for classifying rings into two categories, namely type I and type II rings, is proposed based on their statistical characteristics. Defective detector elements and dusty scintillator screens result in type I rings, while mis-calibrated detector elements lead to type II rings. Unlike conventional approaches, we emphasize separate detection and correction schemes for each type of ring for their effective removal. For the detection of type I rings, the histogram of the responses of the detector elements is used, and a modified fast image inpainting algorithm is adopted to correct the responses of the defective pixels. On the other hand, to detect type II rings, a simple filtering scheme based on the fast Fourier transform (FFT) is first used to smooth the sum curve derived from the type I ring-corrected projection data. The difference between the sum curve and its smoothed version is then used to detect the positions of the mis-calibrated elements. Then, to remove the constant bias with view angle suffered by the responses of the mis-calibrated detector elements, an estimated dc shift is subtracted from them. The performance of the proposed algorithm is evaluated using real micro-CT images and is compared with three recently reported algorithms. Simulation results demonstrate superior performance of the proposed technique as compared to the techniques reported in the literature. PMID:21513928
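    A minimal sketch of the type II correction described above, low-pass filtering the detector sum curve and subtracting the residual as a per-detector dc shift; the function name, the cutoff of 10 harmonics, and the simple rfft filter are illustrative assumptions, not the authors' implementation:

    ```python
    import numpy as np

    def correct_type2_rings(sino, keep_freqs=10):
        # sino: projection data, shape (n_views, n_detectors).
        # Summing over view angle: mis-calibrated detector elements show
        # up as sharp spikes in this otherwise smooth curve.
        sum_curve = sino.sum(axis=0)
        # Low-pass the sum curve by keeping only the lowest FFT harmonics.
        f = np.fft.rfft(sum_curve)
        f[keep_freqs:] = 0.0
        smooth = np.fft.irfft(f, n=sum_curve.size)
        # The difference marks ring positions; treat it as a constant
        # (view-independent) dc shift per detector and subtract it.
        dc_shift = (sum_curve - smooth) / sino.shape[0]
        return sino - dc_shift[np.newaxis, :]
    ```

    A real implementation would apply this only at detector positions flagged as mis-calibrated, rather than across the whole row as done here.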

  11. PIC Algorithm with Multiple Poisson Equation Solves During One Time Step

    NASA Astrophysics Data System (ADS)

    Ren, Junxue; Godar, Trenton; Menart, James; Mahalingam, Sudhakar; Choi, Yongjun; Loverich, John; Stoltz, Peter H.

    2015-09-01

    In order to reduce the overall computational time of a PIC (particle-in-cell) computer simulation, an attempt was made to utilize larger time step sizes by implementing multiple solutions of Poisson's equation within one time step. The hope was that this would make the PIC simulation stable at larger time steps than an explicit technique can use, and that using larger time steps would reduce the overall computational time, even though the computational time per time step would increase. A three-dimensional PIC code that tracks electrons and ions throughout a three-dimensional Cartesian computational domain is used to perform this study. The results of altering the number of times Poisson's equation is solved during a single time step are presented. Also, the time-step size that can be used while still maintaining a stable solution is surveyed. The results indicate that using multiple Poisson solves during one time step provides some ability to use larger time steps in PIC simulations, but the increase in time step size is not significant and the overall simulation run time is not reduced.
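    One plausible reading of "multiple Poisson solves during one time step" is an iterated field solve: charge is re-deposited at the latest predicted end-of-step positions and the push is repeated from the original state with the updated field. The sketch below illustrates that idea on a 1-D periodic electrostatic model; the function names, nearest-grid-point deposition, and FFT Poisson solve are assumptions for illustration, not the code used in the study:

    ```python
    import numpy as np

    def solve_poisson_periodic(rho, dx):
        # FFT solve of d2(phi)/dx2 = -rho on a periodic 1-D grid.
        k = 2.0 * np.pi * np.fft.fftfreq(rho.size, d=dx)
        rho_hat = np.fft.fft(rho)
        phi_hat = np.zeros_like(rho_hat)
        nz = k != 0
        phi_hat[nz] = rho_hat[nz] / k[nz] ** 2
        return np.real(np.fft.ifft(phi_hat))

    def deposit_charge(x, n_cells, dx, q=1.0):
        # Nearest-grid-point charge deposition.
        idx = np.floor(x / dx).astype(int) % n_cells
        rho = np.zeros(n_cells)
        np.add.at(rho, idx, q / dx)
        return rho

    def pic_step(x, v, dt, n_cells, dx, n_solves=2):
        # One particle push of size dt with n_solves Poisson solves:
        # each pass re-deposits charge at the predicted positions and
        # re-pushes from the original state with the updated field.
        length = n_cells * dx
        x0, v0 = x.copy(), v.copy()
        for _ in range(n_solves):
            rho = deposit_charge(x, n_cells, dx)
            phi = solve_poisson_periodic(rho, dx)
            E = -np.gradient(phi, dx)
            idx = np.floor(x0 / dx).astype(int) % n_cells
            v = v0 + dt * E[idx]
            x = (x0 + dt * v) % length
        return x, v
    ```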

  12. An adaptive lattice Boltzmann scheme for modeling two-fluid-phase flow in porous medium systems

    NASA Astrophysics Data System (ADS)

    Dye, Amanda L.; McClure, James E.; Adalsteinsson, David; Miller, Cass T.

    2016-04-01

    We formulate a multiple-relaxation-time (MRT) lattice-Boltzmann method (LBM) to simulate two-fluid-phase flow in porous medium systems. The MRT LBM is applied to simulate the displacement of a wetting fluid by a nonwetting fluid in a system corresponding to a microfluidic cell. Analysis of the simulation shows widely varying time scales for the dynamics of fluid pressures, fluid saturations, and interfacial curvatures that are typical characteristics of such systems. Displacement phenomena include Haines jumps, which are relatively short duration isolated events of rapid fluid displacement driven by capillary instability. An adaptive algorithm is advanced using a level-set method to locate interfaces and estimate their rate of advancement. Because the displacement dynamics are confined to the interfacial regions for a majority of the relaxation time, the computational effort is focused on these regions. The proposed algorithm is shown to reduce computational effort by an order of magnitude, while yielding essentially identical solutions to a conventional fully coupled approach. The challenges posed by Haines jumps are also resolved by the adaptive algorithm. Possible extensions to the advanced method are discussed.

  13. A Muscle Synergy-Inspired Adaptive Control Scheme for a Hybrid Walking Neuroprosthesis

    PubMed Central

    Alibeji, Naji A.; Kirsch, Nicholas Andrew; Sharma, Nitin

    2015-01-01

    A hybrid neuroprosthesis that uses an electric motor-based wearable exoskeleton and functional electrical stimulation (FES) has promising potential to restore walking in persons with paraplegia. A hybrid actuation structure introduces effector redundancy, making its automatic control a challenging task because multiple muscles and an additional electric motor need to be coordinated. Inspired by the muscle synergy principle, we designed a low-dimensional controller to control multiple effectors: FES of multiple muscles and electric motors. The resulting control system may be less complex and easier to control. To obtain the muscle synergy-inspired low-dimensional control, a subject-specific gait model was optimized to compute optimal control signals for the multiple effectors. The optimal control signals were then dimensionally reduced by using principal component analysis to extract synergies. Then, an adaptive feedforward controller with an update law for the synergy activation was designed. In addition, feedback control was used to provide stability and robustness to the control design. The adaptive-feedforward and feedback control structure makes the low-dimensional controller more robust to disturbances and variations in the model parameters and may help to compensate for other time-varying phenomena (e.g., muscle fatigue). This is proven by using a Lyapunov stability analysis, which yielded semi-global uniformly ultimately bounded tracking. Computer simulations were performed to test the new controller on a 4-degree-of-freedom gait model. PMID:26734606

  14. Automated detection scheme of architectural distortion in mammograms using adaptive Gabor filter

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Ruriha; Teramoto, Atsushi; Matsubara, Tomoko; Fujita, Hiroshi

    2013-03-01

    Breast cancer is a serious health concern for all women. Computer-aided detection for mammography has been used for detecting masses and micro-calcifications. However, the sensitivity of automated detection of architectural distortion remains a challenge. In this study, we propose a novel automated method for detecting architectural distortion. Our method consists of analysis of the mammary gland structure, detection of the distorted region, and reduction of false positive results. We developed an adaptive Gabor filter for analyzing the mammary gland structure, which selects filter parameters depending on the thickness of the gland structure. As post-processing, healthy mammary glands that run from the nipple to the chest wall are eliminated by angle analysis. Moreover, background mammary glands are removed based on the intensity output image obtained from the adaptive Gabor filter. The distorted region of the mammary gland is then detected as an initial candidate using a concentration index followed by binarization and labeling. False positives in the initial candidates are eliminated using 23 types of characteristic features and a support vector machine. In the experiments, we compared the automated detection results with interpretations by a radiologist using 50 cases (200 images) from the Digital Database for Screening Mammography (DDSM). As a result, the true positive rate was 82.72%, and the number of false positives per image was 1.39. These results indicate that the proposed method may be useful for detecting architectural distortion in mammograms.

  15. From Classical to Quantum and Back: A Hamiltonian Scheme for Adaptive Multiresolution Classical/Path-Integral Simulations.

    PubMed

    Kreis, Karsten; Tuckerman, Mark E; Donadio, Davide; Kremer, Kurt; Potestio, Raffaello

    2016-07-12

    Quantum delocalization of atomic nuclei affects the physical properties of many hydrogen-rich liquids and biological systems even at room temperature. In computer simulations, quantum nuclei can be modeled via the path-integral formulation of quantum statistical mechanics, which implies a substantial increase in computational overhead. By restricting the quantum description to a small spatial region, this cost can be significantly reduced. Herein, we derive a bottom-up, rigorous, Hamiltonian-based scheme that allows molecules to change from quantum to classical and vice versa on the fly as they diffuse through the system, both reducing overhead and making quantum grand-canonical simulations possible. The method is validated via simulations of low-temperature parahydrogen. Our adaptive resolution approach paves the way to efficient quantum simulations of biomolecules, membranes, and interfaces. PMID:27214610

  16. An adaptive undersampling scheme of wavelet-encoded parallel MR imaging for more efficient MR data acquisition

    NASA Astrophysics Data System (ADS)

    Xie, Hua; Bosshard, John C.; Hill, Jason E.; Wright, Steven M.; Mitra, Sunanda

    2016-03-01

    Magnetic Resonance Imaging (MRI) offers noninvasive, high-resolution, high-contrast cross-sectional anatomic images through the body. The data of conventional MRI are collected in the spatial frequency (Fourier) domain, also known as k-space. Because there is still a great need to improve the temporal resolution of MRI, compressed sensing (CS) has been proposed for MR imaging to exploit the sparsity of MR images; it shows great potential to reduce scan time significantly, but it poses its own unique problems. This paper revisits wavelet-encoded MR imaging, which replaces phase encoding in conventional MRI data acquisition with wavelet encoding by applying wavelet-shaped spatially selective radiofrequency (RF) excitation, and keeps the readout direction as frequency encoding. The practicality of wavelet-encoded MRI by itself is limited due to the SNR penalties and poor time resolution compared to conventional Fourier-based MRI. To compensate for those disadvantages, this paper first introduces an undersampling scheme named the significance map for sparse wavelet-encoded k-space to speed up data acquisition as well as to allow for various adaptive imaging strategies. The proposed adaptive wavelet-encoded undersampling scheme does not require prior knowledge of the subject to be scanned. Multiband (MB) parallel imaging is also incorporated with wavelet-encoded MRI by exciting multiple regions simultaneously for further reduction in scan time, desirable for medical applications. The simulation and experimental results are presented, showing the feasibility of the proposed approach in further reducing the redundancy of the wavelet k-space data while maintaining relatively high quality.

  17. Lyapunov exponents and adaptive mesh refinement for high-speed flows using a discontinuous Galerkin scheme

    NASA Astrophysics Data System (ADS)

    Moura, R. C.; Silva, A. F. C.; Bigarella, E. D. V.; Fazenda, A. L.; Ortega, M. A.

    2016-08-01

    This paper proposes two important improvements to shock-capturing strategies using a discontinuous Galerkin scheme, namely, accurate shock identification via finite-time Lyapunov exponent (FTLE) operators and efficient shock treatment through a point-implicit discretization of a PDE-based artificial viscosity technique. The advocated approach is based on the FTLE operator, originally developed in the context of dynamical systems theory to identify certain types of coherent structures in a flow. We propose the application of FTLEs in the detection of shock waves and demonstrate the operator's ability to identify strong and weak shocks equally well. The detection algorithm is coupled with a mesh refinement procedure and applied to transonic and supersonic flows. While the proposed strategy can be used potentially with any numerical method, a high-order discontinuous Galerkin solver is used in this study. In this context, two artificial viscosity approaches are employed to regularize the solution near shocks: an element-wise constant viscosity technique and a PDE-based smooth viscosity model. As the latter approach is more sophisticated and preferable for complex problems, a point-implicit discretization in time is proposed to reduce the extra stiffness introduced by the PDE-based technique, making it more competitive in terms of computational cost.

  18. Adaptive Kalman filter implementation by a neural network scheme for tracking maneuvering targets

    NASA Astrophysics Data System (ADS)

    Amoozegar, Farid; Sundareshan, Malur K.

    1995-07-01

    Conventional target tracking algorithms based on linear estimation techniques perform quite efficiently when the target motion does not involve maneuvers. Target maneuvers involving short-term accelerations, however, cause a bias (e.g., a jump) in the measurement sequence which, unless compensated, results in divergence of the Kalman filter that provides estimates of target position and velocity, in turn leading to a loss of track. Accurate compensation for the bias requires processing more samples of the input signals, which adds to the computational complexity. Waiting for more samples can also result in a total loss of track: if the target begins a new maneuver before the first one is compensated for, the filter never converges. Most of the algorithms proposed in the current literature hence have the disadvantage of losing the target during short-term accelerations, i.e., when the duration of acceleration is comparable to the time period between the measurements. Maneuver models based on Bayesian probability calculations and linear estimation suffer from this time lag. We therefore propose a neural network scheme for the modeling of target maneuvers. The primary motivation for employing a neural network is fast compensation: the parallel processing capability of a properly trained neural network can permit fast processing of features to yield correct acceleration estimates, and hence can take the burden off the primary Kalman filter, which still provides the target position and velocity estimates.

  19. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksized transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. Which coders are selected to code any given image region is made through a threshold driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  20. Electronic Structure Calculations and Adaptation Scheme in Multi-core Computing Environments

    SciTech Connect

    Seshagiri, Lakshminarasimhan; Sosonkina, Masha; Zhang, Zhao

    2009-05-20

    Multi-core processing environments have become the norm in generic computing and are being considered for adding an extra dimension to the execution of any application. The T2 Niagara processor is a unique environment consisting of eight cores, each capable of running eight threads simultaneously. Applications like the General Atomic and Molecular Electronic Structure System (GAMESS), used for ab initio molecular quantum chemistry calculations, can be good indicators of the performance of such machines and can serve as a guideline for both hardware designers and application programmers. In this paper we benchmark GAMESS performance on a T2 Niagara processor for a couple of molecules. We also show the suitability of using a middleware-based adaptation algorithm with GAMESS in such a multi-core environment.

  1. Parallel adaptive mesh-refining scheme on a three-dimensional unstructured tetrahedral mesh and its applications

    NASA Astrophysics Data System (ADS)

    Lian, Y.-Y.; Hsu, K.-H.; Shao, Y.-L.; Lee, Y.-M.; Jeng, Y.-W.; Wu, J.-S.

    2006-12-01

    The development of a parallel three-dimensional (3-D) adaptive mesh refinement (PAMR) scheme for an unstructured tetrahedral mesh using dynamic domain decomposition on a memory-distributed machine is presented in detail. A memory-saving cell-based data structure is designed such that the resulting mesh information can be readily utilized in both node- and cell-based numerical methods. The general procedure includes isotropic refinement of one parent cell into eight child cells, followed by anisotropic refinement which effectively removes hanging nodes. A simple but effective mesh-quality control mechanism is employed to preserve the mesh quality. The resulting parallel performance of this PAMR is found to scale approximately as N for N⩽32. Two test cases, including a particle method (a parallel DSMC solver for rarefied gas dynamics) and an equation-based method (a parallel Poisson-Boltzmann equation solver for electrostatic fields), are used to demonstrate the generality of the PAMR module. It is argued that this PAMR scheme can be applied with any numerical method that adopts an unstructured tetrahedral mesh.
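    The isotropic refinement of one parent cell into eight child cells can be sketched with a minimal cell-based data structure; the class layout, node-id bookkeeping, and the choice of octahedron diagonal below are illustrative assumptions rather than the paper's actual implementation:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Cell:
        nodes: tuple              # the 4 vertex ids of a tetrahedron
        level: int = 0
        children: list = field(default_factory=list)

    def refine_isotropic(cell, next_node_id, mid_cache):
        # Split one parent tetrahedron into 8 children: 4 corner tets
        # plus 4 tets from the inner octahedron (Bey-style refinement).
        # Edge midpoints are cached so shared edges reuse one node id.
        a, b, c, d = cell.nodes
        def mid(p, q):
            key = (min(p, q), max(p, q))
            if key not in mid_cache:
                mid_cache[key] = next_node_id[0]
                next_node_id[0] += 1
            return mid_cache[key]
        ab, ac, ad = mid(a, b), mid(a, c), mid(a, d)
        bc, bd, cd = mid(b, c), mid(b, d), mid(c, d)
        child_nodes = [
            (a, ab, ac, ad), (b, ab, bc, bd),     # corner tets
            (c, ac, bc, cd), (d, ad, bd, cd),
            (ab, ac, bc, cd), (ab, ac, ad, cd),   # inner octahedron,
            (ab, ad, bd, cd), (ab, bc, bd, cd),   # split along ab-cd
        ]
        cell.children = [Cell(n, cell.level + 1) for n in child_nodes]
        return cell.children
    ```

    The shared midpoint cache is what makes neighboring refined cells agree on node ids; the subsequent anisotropic pass that removes hanging nodes at coarse/fine interfaces is not shown.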

  2. Score level fusion scheme based on adaptive local Gabor features for face-iris-fingerprint multimodal biometric

    NASA Astrophysics Data System (ADS)

    He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Chen, Ying

    2014-05-01

    A multimodal biometric system has been considered a promising technique to overcome the defects of unimodal biometric systems. We have introduced a fusion scheme to gain a better understanding and fusion method for a face-iris-fingerprint multimodal biometric system. In our case, we use particle swarm optimization to train a set of adaptive Gabor filters in order to achieve the proper Gabor basis functions for each modality. For a closer analysis of texture information, two different local Gabor features for each modality are produced from the corresponding Gabor coefficients. Next, all matching scores of the two Gabor features for each modality are projected to a single scalar score via a trained support vector regression model for a final decision. A large-scale dataset is formed to validate the proposed scheme using the Facial Recognition Technology database-fafb and CASIA-V3-Interval together with FVC2004-DB2a datasets. The experimental results demonstrate that, as well as achieving more powerful local Gabor features for the multimodalities and better recognition performance through the fusion strategy, our architecture also outperforms some state-of-the-art individual methods and other fusion approaches for face-iris-fingerprint multimodal biometric systems.

  3. A high-order solver for unsteady incompressible Navier-Stokes equations using the flux reconstruction method on unstructured grids with implicit dual time stepping

    NASA Astrophysics Data System (ADS)

    Cox, Christopher; Liang, Chunlei; Plesniak, Michael W.

    2016-06-01

    We report development of a high-order compact flux reconstruction method for solving unsteady incompressible flow on unstructured grids with implicit dual time stepping. The method falls under the class of methods now referred to as flux reconstruction/correction procedure via reconstruction. The governing equations employ Chorin's classic artificial compressibility formulation with dual time stepping to solve unsteady flow problems. An implicit non-linear lower-upper symmetric Gauss-Seidel scheme with backward Euler discretization is used to efficiently march the solution in pseudo time, while a second-order backward Euler discretization is used to march in physical time. We verify and validate implementation of the high-order method coupled with our implicit time stepping scheme using both steady and unsteady incompressible flow problems. The current implicit time stepping scheme is proven effective in satisfying the divergence-free constraint on the velocity field in the artificial compressibility formulation within the context of the high-order flux reconstruction method. This compact high-order method is very suitable for parallel computing and can easily be extended to moving and deforming grids.
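    The dual time stepping idea above can be sketched as follows, assuming a generic spatial residual R(u); for clarity this sketch marches the pseudo-time problem explicitly, whereas the paper uses an implicit LU-SGS scheme with backward Euler in pseudo time:

    ```python
    import numpy as np

    def bdf2_dual_time_step(u_n, u_nm1, spatial_residual, dt,
                            dtau=1e-3, tol=1e-8, max_iter=20000):
        # One physical BDF2 step via dual time stepping: march
        #   du/dtau = -[(3u - 4u_n + u_nm1)/(2 dt) + R(u)]
        # in pseudo time until the unsteady residual vanishes, so the
        # converged u satisfies the second-order backward difference.
        u = u_n.copy()
        for _ in range(max_iter):
            r = (3.0 * u - 4.0 * u_n + u_nm1) / (2.0 * dt) + spatial_residual(u)
            if np.linalg.norm(r) < tol:
                break
            u -= dtau * r
        return u
    ```

    Driving this residual to zero at every physical step is also what enforces the divergence-free velocity constraint in the artificial compressibility formulation, since the continuity equation is part of R(u).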

  4. A high-order solver for unsteady incompressible Navier-Stokes equations using the flux reconstruction method on unstructured grids with implicit dual time stepping

    NASA Astrophysics Data System (ADS)

    Cox, Christopher; Liang, Chunlei; Plesniak, Michael

    2015-11-01

    This paper reports development of a high-order compact method for solving unsteady incompressible flow on unstructured grids with implicit time stepping. The method falls under the class of methods now referred to as flux reconstruction/correction procedure via reconstruction. The governing equations employ the classical artificial compressibility treatment, where dual time stepping is needed to solve unsteady flow problems. An implicit non-linear lower-upper symmetric Gauss-Seidel scheme with backward Euler discretization is used to efficiently march the solution in pseudo time, while a second-order backward Euler discretization is used to march in physical time. We verify and validate implementation of the high-order method coupled with our implicit time-stepping scheme. Three-dimensional results computed on many processing elements will be presented. The high-order method is very suitable for parallel computing and can easily be extended to moving and deforming grids. The current implicit time stepping scheme is proven effective in satisfying the divergence-free constraint on the velocity field in the artificial compressibility formulation within the context of the high-order flux reconstruction method. Financial support provided under the GW Presidential Merit Fellowship.

  5. Simulations of precipitation using the Community Earth System Model (CESM): Sensitivity to microphysics time step

    NASA Astrophysics Data System (ADS)

    Murthi, A.; Menon, S.; Sednev, I.

    2011-12-01

    An inherent difficulty in the ability of global climate models to accurately simulate precipitation lies in the use of a large time step, Δt (usually 30 minutes), to solve the governing equations. Since microphysical processes are characterized by small time scales compared to Δt, finite difference approximations used to advance microphysics equations suffer from numerical instability and large time truncation errors. With this in mind, the sensitivity of precipitation simulated by the atmospheric component of CESM, namely the Community Atmosphere Model (CAM 5.1), to the microphysics time step (τ) is investigated. Model integrations are carried out for a period of five years with a spin-up time of about six months for a horizontal resolution of 2.5 × 1.9 degrees and 30 levels in the vertical, with Δt = 1800 s. The control simulation with τ = 900 s is compared with one using τ = 300 s for accumulated precipitation and radiation budgets at the surface and top of the atmosphere (TOA), while keeping Δt fixed. Our choice of τ = 300 s is motivated by previous work on warm rain processes wherein it was shown that a value of τ around 300 s was necessary, but not sufficient, to ensure positive definiteness and numerical stability of the explicit time integration scheme used to integrate the microphysical equations. However, since the entire suite of microphysical processes is represented in our case, we suspect that this might impose additional restrictions on τ. The τ = 300 s case produces differences in large-scale accumulated rainfall from the τ = 900 s case by as large as 200 mm over certain regions of the globe. The spatial patterns of total accumulated precipitation using τ = 300 s are in closer agreement with satellite observed precipitation, when compared to the τ = 900 s case. Differences are also seen in the radiation budget, with the τ = 300 (900) s cases producing surpluses that range between 1-3 W/m2 at both the TOA and surface in the global
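    A minimal way to realize a smaller microphysics time step τ inside a fixed model step Δt is explicit sub-stepping, sketched below; the function and variable names are hypothetical, and the actual CAM 5.1 integration is considerably more involved:

    ```python
    import math

    def advance_microphysics(state, dt_model, tau, tendency):
        # Take n = ceil(dt_model / tau) explicit sub-steps so that no
        # microphysics update exceeds the requested time step tau
        # (e.g. dt_model = 1800 s with tau = 300 s gives 6 sub-steps).
        n = math.ceil(dt_model / tau)
        dt_sub = dt_model / n
        for _ in range(n):
            t = tendency(state)
            state = {k: state[k] + dt_sub * t[k] for k in state}
        return state
    ```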

  6. Adaptive block-wise alphabet reduction scheme for lossless compression of images with sparse and locally sparse histograms

    NASA Astrophysics Data System (ADS)

    Masmoudi, Atef; Zouari, Sonia; Ghribi, Abdelaziz

    2015-11-01

    We propose a new adaptive block-wise lossless image compression algorithm, which is based on the so-called alphabet reduction scheme combined with adaptive arithmetic coding (AC). This new encoding algorithm is particularly efficient for lossless compression of images with sparse and locally sparse histograms. AC is a very efficient technique for lossless data compression and produces a rate that is close to the entropy; however, a compression performance loss occurs when encoding images or blocks whose number of active symbols is small compared with the number of symbols in the nominal alphabet, which amplifies the zero-frequency problem. Generally, most methods add one to the frequency count of each symbol from the nominal alphabet, which distorts the statistical model and therefore reduces the efficiency of the AC. The aim of this work is to overcome this drawback by assigning to each image block the smallest possible set including all the existing symbols, called active symbols. This is an alternative to using the nominal alphabet when applying conventional arithmetic encoders. We show experimentally that the proposed method outperforms several lossless image compression encoders and standards, including the conventional arithmetic encoders, JPEG2000, and JPEG-LS.
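    The core argument, that initializing counts only for a block's active symbols beats Laplace smoothing over the full nominal alphabet, can be illustrated with ideal adaptive-AC code lengths. This is a sketch of the principle, not the paper's encoder, and the per-block side cost of signaling the active set is omitted:

    ```python
    import math

    def adaptive_ac_code_length(block, alphabet):
        # Ideal code length (bits) of an adaptive arithmetic coder that
        # starts every symbol of `alphabet` with a count of one
        # (Laplace smoothing) and updates the counts sequentially.
        counts = {s: 1 for s in alphabet}
        total = len(counts)
        bits = 0.0
        for s in block:
            bits -= math.log2(counts[s] / total)
            counts[s] += 1
            total += 1
        return bits

    # A sparse block: only 2 of 256 nominal symbols actually occur.
    block = [0, 0, 7, 0, 7, 7, 0, 0] * 8
    nominal = adaptive_ac_code_length(block, range(256))
    reduced = adaptive_ac_code_length(block, set(block))
    ```

    With the reduced alphabet no probability mass is wasted on the 254 inactive symbols, so the code length for the sparse block is substantially shorter.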

  7. Region of interest based robust watermarking scheme for adaptation in small displays

    NASA Astrophysics Data System (ADS)

    Vivekanandhan, Sapthagirivasan; K. B., Kishore Mohan; Vemula, Krishna Manohar

    2010-02-01

    Nowadays multimedia data can be easily replicated, and the copyright is not legally protected. Cryptography does not allow the use of digital data in its original form, and once the data is decrypted, it is no longer protected. Here we propose a new doubly protected digital image watermarking algorithm, which embeds the watermark image blocks into adjacent regions of the host image itself based on their block similarity coefficient; it is robust to various noise effects such as Poisson noise, Gaussian noise, and random noise, and thereby provides double security against noise and hackers. As instrumentation applications require highly accurate data, the watermark image to be extracted from the watermarked image must be immune to various noise effects. Our results provide a better extracted image compared to existing techniques, and in addition we have performed resizing for various displays. Adaptive resizing for various display sizes is explored: we crop the required information in a frame and zoom it for a large display, or resize it for a small display using a threshold value; in either case the background is not given much importance, and it is only the foreground object that gains importance, which should be helpful in performing surgeries.

  8. Performance analysis of an adaptive multiple access scheme for the message service of a land mobile satellite experiment (MSAT-X)

    NASA Technical Reports Server (NTRS)

    Yan, T.-Y.; Li, V. O. K.

    1984-01-01

    This paper describes an Adaptive Mobile Access Protocol (AMAP) for the message service of MSAT-X, a proposed experimental mobile satellite communication network. Message lengths generated by the mobiles are assumed to be uniformly distributed. The mobiles are dispersed over a wide geographical area and the channel data rate is limited. AMAP is a reservation-based multiple access scheme. The available bandwidth is divided into subchannels, which are in turn divided into reservation and message channels. The ALOHA multiple access scheme is employed in the reservation channels, while the message channels are demand-assigned. AMAP adaptively reallocates the reservation and message channels to optimize the total average message delay.

  9. IMPROVEMENTS TO THE TIME STEPPING ALGORITHM OF RELAP5-3D

    SciTech Connect

    Cumberland, R.; Mesina, G.

    2009-01-01

    The RELAP5-3D time step method is used to perform thermo-hydraulic and neutronic simulations of nuclear reactors and other devices. It discretizes time and space by numerically solving several differential equations. Previously, the time step size was controlled by halving or doubling the size of the previous time step. This process caused the code to run slower than it potentially could. In this research project, the RELAP5-3D time step method was modified to allow a new method of changing time steps, to improve execution speed and to control error. The new RELAP5-3D time step method being studied makes the time step proportional to the material Courant limit (MCL), while ensuring that the time step does not increase by more than a factor of two between advancements. As before, if a step fails or mass error is excessive, the time step is cut in half. To examine the performance of the new method, a measure of run time and a measure of error were plotted against a varying MCL proportionality constant (m) in seven test cases. The removal of the upper time step limit produced a small increase in error, but a large decrease in execution time. The best value of m was found to be 0.9. The new algorithm is capable of producing a significant increase in execution speed, with a relatively small increase in mass error. The improvements made are now under consideration for inclusion as a special option in the RELAP5-3D production code.
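The step-control rule described in this abstract (proportional to the MCL, growth capped at a factor of two, halved on failure) can be sketched as follows. This is a hedged illustration of the stated logic, not RELAP5-3D source code:

```python
def next_time_step(dt_prev, dt_mcl, m=0.9, step_failed=False):
    """Candidate time-step controller sketched from the abstract.

    The new step is proportional to the material Courant limit (dt_mcl),
    but may not grow by more than a factor of two between advancements;
    a failed advancement (or excessive mass error) halves the step.
    m = 0.9 is the best proportionality constant reported in the abstract.
    """
    if step_failed:
        return dt_prev / 2.0
    return min(m * dt_mcl, 2.0 * dt_prev)

# a large Courant limit cannot make the step jump by more than 2x at once
dt = next_time_step(dt_prev=1e-3, dt_mcl=1.0)
```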

  10. Stable Collocated-grid Finite Difference Seismic Wave Modeling Using Discontinuous Grids with Locally Variable Time Steps

    NASA Astrophysics Data System (ADS)

    Li, H.; Zhang, Z.; Chen, X.

    2012-12-01

    It is widely accepted that, if uniform grids are used, numerical simulations oversample the high-speed medium in both spatial grid spacing and time step. This oversampling lowers the efficiency of the calculation, especially when a high velocity contrast exists. Based on the collocated-grid finite-difference method (FDM), we present a spatially discontinuous grid algorithm, with localized grid blocks and locally varying time steps, which increases the efficiency of simulating seismic wave propagation and earthquake strong ground motion. According to the velocity structure, we discretize the model into discontinuous grid blocks, and the time step of each block is determined by local stability. The key problem of the discontinuous grid method is the connection between grid blocks with different grid spacings. We handle this with a transitional area overlapped by both the finer and the coarser grids. In the transitional area, the values of finer ghost points are obtained by interpolation from the coarser grid in the space and time domains, while the values of coarser ghost points are obtained by downsampling from the finer grid. How the coarser ghost points are treated can influence the stability of long-duration simulations. After testing different downsampling methods, we chose Gaussian filtering. A 4th-order Runge-Kutta scheme is used for the time integration in our numerical method. For our discontinuous grid FDM, discontinuous time steps for the coarser and the finer grids are used to increase simulation efficiency. Numerical tests indicate that our method provides a stable solution even for long-duration simulations, without any additional filtering, for a grid-spacing ratio n=2; for larger grid-spacing ratios, Gaussian filtering can be used to preserve stability. With the collocated-grid FDM, which is flexible and accurate in implementation of free
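The per-block step selection from local stability can be sketched with a standard CFL-type rule. This is a hedged illustration; the CFL constant is a placeholder, not a value from the abstract:

```python
def block_time_step(dx_block, vmax_block, cfl=0.5):
    """Per-block stable time step for a discontinuous-grid FD scheme.

    Each grid block gets its own step from the local CFL condition,
    dt = cfl * dx / v_max, so a coarse block covering a fast medium is
    not forced down to the small global step of the finest block.
    """
    return cfl * dx_block / vmax_block

# coarse block in a fast medium vs. fine block in a slow medium
dt_coarse = block_time_step(dx_block=200.0, vmax_block=4000.0)
dt_fine = block_time_step(dx_block=100.0, vmax_block=3000.0)
```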

  11. Multiple time step molecular dynamics in the optimized isokinetic ensemble steered with the molecular theory of solvation: Accelerating with advanced extrapolation of effective solvation forces

    SciTech Connect

    Omelyan, Igor E-mail: omelyan@icmp.lviv.ua; Kovalenko, Andriy

    2013-12-28

    We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics

  12. The constant displacement scheme for tracking particles in heterogeneous aquifers

    SciTech Connect

    Wen, X.H.; Gomez-Hernandez, J.J.

    1996-01-01

    Simulation of mass transport by particle tracking or random walk in highly heterogeneous media may be computationally inefficient if the traditional constant-time-step scheme is used. A new scheme, which automatically adjusts the time step for each particle according to the local pore velocity so that each particle always travels a constant distance, is shown to be computationally faster than the constant-time-step method for the same degree of accuracy. Using the constant displacement scheme, transport calculations in a 2-D aquifer model with a natural log-transmissivity variance of 4 can be 8.6 times faster than with the constant-time-step scheme.
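The constant-displacement idea can be sketched in one dimension as follows. This is a hedged illustration assuming deterministic advection only (the random-walk dispersion term is omitted), not the authors' code:

```python
import math

def track_particle(x0, velocity, dl, n_steps):
    """Advect one particle with a constant-displacement scheme (1-D sketch).

    Instead of a fixed time step, each step uses dt = dl / |v(x)|, so the
    particle always travels the same distance dl: fast regions are crossed
    in small time increments, slow regions in large ones.
    """
    x, t = float(x0), 0.0
    for _ in range(n_steps):
        v = velocity(x)
        dt = dl / abs(v)              # local, per-particle time step
        x += math.copysign(dl, v)     # constant displacement
        t += dt
    return x, t

# velocity increases downstream, so successive time steps shrink
x, t = track_particle(0.0, lambda x: 2.0 + x, dl=0.1, n_steps=5)
```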

  13. A Conceptual Scheme for an Adaptation of Participation Training in Adult Education for Use in the Three Love Movement of Japan.

    ERIC Educational Resources Information Center

    Kamitsuka, Arthur Jun

    This study concentrated on developing a conceptual scheme for adapting participation training, an adult education approach based on democratic concepts and practices, to the Three Love Movement (Love of God, Love of Soil, Love of Man) in Japan. (This Movement is an outgrowth of Protestant folk schools.) While democratization is an aim, the…

  14. Multi-dimensional Upwind Fluctuation Splitting Scheme with Mesh Adaption for Hypersonic Viscous Flow. Degree awarded by Virginia Polytechnic Inst. and State Univ., 9 Nov. 2001

    NASA Technical Reports Server (NTRS)

    Wood, William A., III

    2002-01-01

    A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two-dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-order, dual mesh upwind flux difference splitting scheme (DMFDSFV), and is shown to produce more accurate results using fewer computer resources for a wide range of test cases. A Blasius flat plate viscous validation case reveals a more accurate upsilon-velocity profile for fluctuation splitting, and the reduced artificial dissipation production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid converged skin friction coefficients with only five points in the boundary layer for this case. The second half of the report develops a local, compact, anisotropic unstructured mesh adaptation scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. The adaptation strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization.

  15. Multirate Runge-Kutta schemes for advection equations

    NASA Astrophysics Data System (ADS)

    Schlegel, Martin; Knoth, Oswald; Arnold, Martin; Wolke, Ralf

    2009-04-01

    Explicit time integration methods can be employed to simulate a broad spectrum of physical phenomena. The wide range of scales encountered leads to the problem that the fastest cell of the simulation dictates the global time step. Multirate time integration methods can be employed to alter the time step locally, so that slower components take longer and fewer time steps, resulting in a moderate to substantial reduction of the computational cost depending on the scenario to simulate [S. Osher, R. Sanders, Numerical approximations to nonlinear conservation laws with locally varying time and space grids, Math. Comput. 41 (1983) 321-336; H. Tang, G. Warnecke, A class of high resolution schemes for hyperbolic conservation laws and convection-diffusion equations with varying time and space grids, SIAM J. Sci. Comput. 26 (4) (2005) 1415-1431; E. Constantinescu, A. Sandu, Multirate timestepping methods for hyperbolic conservation laws, SIAM J. Sci. Comput. 33 (3) (2007) 239-278]. In air pollution modeling the advection part is usually integrated explicitly in time, where the time step is constrained by a locally varying Courant-Friedrichs-Lewy (CFL) number. Multirate schemes are a useful tool for decoupling different physical regions so that this constraint becomes a local instead of a global restriction. Therefore it is of major interest to apply multirate schemes to the advection equation. We introduce a generic recursive multirate Runge-Kutta scheme that can be easily adapted to an arbitrary number of refinement levels. It preserves the linear invariants of the system and is of third-order accuracy when applied to certain explicit Runge-Kutta methods as base methods.
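The basic multirate mechanism can be sketched with a single macro step of a two-component scheme. Forward Euler stands in here for the Runge-Kutta base method of the paper; the coupling (slow state frozen during sub-stepping) is an illustrative assumption, not the paper's recursive scheme:

```python
def multirate_step(y_slow, y_fast, f_slow, f_fast, dt, ratio=2):
    """One macro step of a two-rate explicit scheme (sketch).

    The slow component takes one step of size dt while the fast component
    takes `ratio` sub-steps of size dt/ratio, with the slow state frozen
    during sub-stepping, so the fast cells no longer dictate the global step.
    """
    y_slow_new = y_slow + dt * f_slow(y_slow, y_fast)
    h = dt / ratio
    for _ in range(ratio):
        y_fast = y_fast + h * f_fast(y_slow, y_fast)  # frozen slow state
    return y_slow_new, y_fast

# both components decay, but the fast one is resolved with two sub-steps
ys, yf = multirate_step(1.0, 1.0, lambda s, f: -s, lambda s, f: -f, dt=0.1)
```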

  16. Stability analysis and time-step limits for a Monte Carlo Compton-scattering method

    SciTech Connect

    Densmore, Jeffery D. Warsa, James S. Lowrie, Robert B.

    2010-05-20

    A Monte Carlo method for simulating Compton scattering in high energy density applications has been presented that models the photon-electron collision kinematics exactly [E. Canfield, W.M. Howard, E.P. Liang, Inverse Comptonization by one-dimensional relativistic electrons, Astrophys. J. 323 (1987) 565]. However, implementing this technique typically requires an explicit evaluation of the material temperature, which can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and develop two time-step limits that avoid undesirable behavior. The first time-step limit prevents instabilities, while the second, more restrictive time-step limit avoids both instabilities and nonphysical oscillations. With a set of numerical examples, we demonstrate the efficacy of these time-step limits.

  17. Numerical Relativistic Magnetohydrodynamics with ADER Discontinuous Galerkin methods on adaptively refined meshes.

    NASA Astrophysics Data System (ADS)

    Zanotti, O.; Dumbser, M.; Fambri, F.

    2016-05-01

    We describe a new method for the solution of the ideal MHD equations in special relativity which adopts the following strategy: (i) the main scheme is based on Discontinuous Galerkin (DG) methods, allowing for an arbitrary accuracy of order N+1, where N is the degree of the basis polynomials; (ii) in order to cope with oscillations at discontinuities, an "a posteriori" sub-cell limiter is activated, which scatters the DG polynomials of the previous time-step onto a set of 2N+1 sub-cells, over which the solution is recomputed by means of a robust finite volume scheme; (iii) a local spacetime Discontinuous-Galerkin predictor is applied both on the main grid of the DG scheme and on the sub-grid of the finite volume scheme; (iv) adaptive mesh refinement (AMR) with local time-stepping is used. We validate the new scheme and comment on its potential applications in high energy astrophysics.

  18. Fast and Adaptive Detection of Pulmonary Nodules in Thoracic CT Images Using a Hierarchical Vector Quantization Scheme

    PubMed Central

    Han, Hao; Li, Lihong; Han, Fangfang; Song, Bowen; Moore, William; Liang, Zhengrong

    2014-01-01

    Computer-aided detection (CADe) of pulmonary nodules is critical to assisting radiologists in early identification of lung cancer from computed tomography (CT) scans. This paper proposes a novel CADe system based on a hierarchical vector quantization (VQ) scheme. Compared with the commonly-used simple thresholding approach, high-level VQ yields a more accurate segmentation of the lungs from the chest volume. In identifying initial nodule candidates (INCs) within the lungs, low-level VQ proves to be effective for INC detection and segmentation, as well as computationally efficient compared to existing approaches. False-positive (FP) reduction is conducted via rule-based filtering operations in combination with a feature-based support vector machine classifier. The proposed system was validated on 205 patient cases from the publicly available online LIDC (Lung Image Database Consortium) database, with each case having at least one juxta-pleural nodule annotation. Experimental results demonstrated that our CADe system obtained an overall sensitivity of 82.7% at a specificity of 4 FPs/scan, and 89.2% sensitivity at 4.14 FPs/scan for the classification of juxta-pleural INCs only. The proposed system outperforms comparable CADe systems and demonstrates its potential for fast and adaptive detection of pulmonary nodules via CT imaging. PMID:25486657

  19. Adaptive Yaw Rate Aware Sensor Wakeup Schemes Protocol (A-YAP) for Target Prediction and Tracking in Sensor Networks

    NASA Astrophysics Data System (ADS)

    Raza, Muhammad Taqi; Mir, Zeeshan Hameed; Akbar, Ali Hammad; Yoo, Seung-Wha; Kim, Ki-Hyung

    Target tracking is one of the key applications of Wireless Sensor Networks (WSNs) and forms the basis for numerous other applications. The overall procedure of target tracking involves target detection, localization, and tracking. Because of the WSNs' resource constraints (especially energy), it is highly desirable that target tracking be done by involving as few sensor nodes as possible. Due to the uncertain behavior of the target and the resulting mobility patterns, this goal becomes harder to achieve without predicting the future locations of the target. A prediction mechanism may allow the activation of only the relevant sensors along the future course, before the target actually reaches the future location. This prior activation contributes to increasing the overall sensor network lifetime by letting non-relevant nodes sleep. In this paper, we first introduce a Yaw rate aware sensor wAkeup Protocol (YAP) for the prediction of future target locations. Second, we present improvements on the YAP design through the incorporation of adaptability. The proposed schemes are distributed in nature and select relevant sensors to determine the target track. The performance of YAP and A-YAP is also discussed for different mobility patterns, which confirms the efficacy of the algorithm.

  20. Electric and magnetic losses modeled by a stable hybrid with explicit-implicit time-stepping for Maxwell's equations

    SciTech Connect

    Halleroed, Tomas Rylander, Thomas

    2008-04-20

    A stable hybridization of the finite-element method (FEM) and the finite-difference time-domain (FDTD) scheme for Maxwell's equations with electric and magnetic losses is presented for two-dimensional problems. The hybrid method combines the flexibility of the FEM with the efficiency of the FDTD scheme and it is based directly on Ampere's and Faraday's law. The electric and magnetic losses can be treated implicitly by the FEM on an unstructured mesh, which allows for local mesh refinement in order to resolve rapid variations in the material parameters and/or the electromagnetic field. It is also feasible to handle larger homogeneous regions with losses by the explicit FDTD scheme connected to an implicitly time-stepped and lossy FEM region. The hybrid method shows second-order convergence for smooth scatterers. The bistatic radar cross section (RCS) for a circular metal cylinder with a lossy coating converges to the analytical solution and an accuracy of 2% is achieved for about 20 points per wavelength. The monostatic RCS for an airfoil that features sharp corners yields a lower order of convergence and it is found to agree well with what can be expected for singular fields at the sharp corners. A careful convergence study with resolutions from 20 to 140 points per wavelength provides accurate extrapolated results for this non-trivial test case, which makes it possible to use as a reference problem for scattering codes that model both electric and magnetic losses.

  1. Time Step Rescaling Recovers Continuous-Time Dynamical Properties for Discrete-Time Langevin Integration of Nonequilibrium Systems

    PubMed Central

    2015-01-01

    When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts. PMID:24555448
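The family of velocity-Verlet-related splittings that the paper analyzes can be illustrated with one well-known member, a BAOAB-style step. This particular splitting is shown only as a concrete example of the family, not as the paper's recommended scheme, and the parameter values are placeholders:

```python
import math
import random

def baoab_step(x, v, force, dt, mass=1.0, gamma=1.0, kT=1.0, rng=random):
    """One BAOAB step of Langevin dynamics (illustrative sketch).

    B: half-kick from the force, A: half-drift of the position,
    O: exact Ornstein-Uhlenbeck update of the velocity (friction + noise),
    then A and B again. With gamma = 0 this reduces to velocity Verlet.
    """
    v += 0.5 * dt * force(x) / mass                 # B
    x += 0.5 * dt * v                               # A
    c = math.exp(-gamma * dt)                       # O (exact OU solve)
    v = c * v + math.sqrt((1.0 - c * c) * kT / mass) * rng.gauss(0.0, 1.0)
    x += 0.5 * dt * v                               # A
    v += 0.5 * dt * force(x) / mass                 # B
    return x, v

# one stochastic step on a harmonic oscillator, force(x) = -x
random.seed(1)
xs, vs = baoab_step(1.0, 0.0, lambda q: -q, dt=0.01)
```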

  2. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.

  3. Modified Chebyshev pseudospectral method with O(N exp -1) time step restriction

    NASA Technical Reports Server (NTRS)

    Kosloff, Dan; Tal-Ezer, Hillel

    1989-01-01

    The extreme eigenvalues of the Chebyshev pseudospectral differentiation operator are O(N exp 2) where N is the number of grid points. As a result of this, the allowable time step in an explicit time marching algorithm is O(N exp -2) which, in many cases, is much below the time step dictated by the physics of the partial differential equation. A new set of interpolating points is introduced such that the eigenvalues of the differentiation operator are O(N) and the allowable time step is O(N exp -1). The properties of the new algorithm are similar to those of the Fourier method. The new algorithm also provides a highly accurate solution for non-periodic boundary value problems.
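The modified point set of this abstract is commonly written as the Kosloff-Tal-Ezer arcsin mapping of the Chebyshev points. A minimal sketch follows (the mapping parameter alpha = 0.99 is illustrative, not a value from the abstract):

```python
import numpy as np

def mapped_chebyshev_points(N, alpha=0.99):
    """Arcsin-mapped Chebyshev points (Kosloff-Tal-Ezer style sketch).

    Standard Chebyshev points cluster like O(N^-2) near the boundaries,
    which forces an O(N^-2) explicit time step. The arcsin mapping
    stretches the grid near the ends so that the minimum spacing (and
    hence the allowable time step) scales like O(N^-1) instead.
    """
    xi = np.cos(np.pi * np.arange(N + 1) / N)   # standard Chebyshev points
    return np.arcsin(alpha * xi) / np.arcsin(alpha)

N = 32
x_std = np.cos(np.pi * np.arange(N + 1) / N)
x_map = mapped_chebyshev_points(N)
# the mapped grid has a much larger minimum spacing near the boundaries
```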

  4. Time-step limits for a Monte Carlo Compton-scattering method

    SciTech Connect

    Densmore, Jeffery D; Warsa, James S; Lowrie, Robert B

    2008-01-01

    Compton scattering is an important aspect of radiative transfer in high energy density applications. In this process, the frequency and direction of a photon are altered by colliding with a free electron. The change in frequency of a scattered photon results in an energy exchange between the photon and target electron and energy coupling between radiation and matter. Canfield, Howard, and Liang have presented a Monte Carlo method for simulating Compton scattering that models the photon-electron collision kinematics exactly. However, implementing their technique in multiphysics problems that include the effects of radiation-matter energy coupling typically requires evaluating the material temperature at its beginning-of-time-step value. This explicit evaluation can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and present time-step limits that avoid instabilities and nonphysical oscillations by considering a spatially independent, purely scattering radiative-transfer problem. Examining a simplified problem is justified because it isolates the effects of Compton scattering, and existing Monte Carlo techniques can robustly model other physics (such as absorption, emission, sources, and photon streaming). Our analysis begins by simplifying the equations that are solved via Monte Carlo within each time step using the Fokker-Planck approximation. Next, we linearize these approximate equations about an equilibrium solution such that the resulting linearized equations describe perturbations about this equilibrium. We then solve these linearized equations over a time step and determine the corresponding eigenvalues, quantities that can predict the behavior of solutions generated by a Monte Carlo simulation as a function of time-step size and other physical parameters. With these results, we develop our time-step limits. This approach is similar to our recent investigation of time discretizations for the

  5. A GPU-accelerated adaptive discontinuous Galerkin method for level set equation

    NASA Astrophysics Data System (ADS)

    Karakus, A.; Warburton, T.; Aksel, M. H.; Sert, C.

    2016-01-01

    This paper presents a GPU-accelerated nodal discontinuous Galerkin method for the solution of two- and three-dimensional level set (LS) equation on unstructured adaptive meshes. Using adaptive mesh refinement, computations are localised mostly near the interface location to reduce the computational cost. Small global time step size resulting from the local adaptivity is avoided by local time-stepping based on a multi-rate Adams-Bashforth scheme. Platform independence of the solver is achieved with an extensible multi-threading programming API that allows runtime selection of different computing devices (GPU and CPU) and different threading interfaces (CUDA, OpenCL and OpenMP). Overall, a highly scalable, accurate and mass conservative numerical scheme that preserves the simplicity of LS formulation is obtained. Efficiency, performance and local high-order accuracy of the method are demonstrated through distinct numerical test cases.

  6. Suggestions for CAP-TSD mesh and time-step input parameters

    NASA Technical Reports Server (NTRS)

    Bland, Samuel R.

    1991-01-01

    Suggestions for some of the input parameters used in the CAP-TSD (Computational Aeroelasticity Program-Transonic Small Disturbance) computer code are presented. These parameters include those associated with the mesh design and time step. The guidelines are based principally on experience with a one-dimensional model problem used to study wave propagation in the vertical direction.

  7. A Monolithic Multi-Time-Step Computational Framework for Transient Advective-Diffusive-Reactive Systems

    NASA Astrophysics Data System (ADS)

    Karimi, S.; Nakshatrala, K. B.

    2014-12-01

    Advection-Diffusion-Reaction (ADR) equations play a crucial role in simulating numerous geophysical phenomena. It is well known that the solutions to these equations exhibit disparate spatial and temporal scales. These mathematical scales occur due to the relative dominance of either advection, diffusion, or reaction processes. Hence, in a careful simulation, one has to choose appropriate time-integrators, time-steps, and numerical formulations for spatial discretization. Multi-time-step coupling methods allow a specific choice of integration methods (either temporal or spatial) in different regions of the spatial domain. In recent years, most of the attempts to design monolithic multi-time-step frameworks favored second-order transient systems in structural dynamics. In this presentation, we will introduce monolithic multi-time-step computational frameworks for ADR equations. These methods are based on the theory of differential/algebraic equations. We shall also provide an overview of results from stability analysis, the study of drift from compatibility constraints, and the analysis of the influence of perturbations. Several benchmark problems will be utilized to demonstrate the theoretical findings and features of the proposed frameworks. Finally, application of the proposed methods to fast bimolecular reactive systems will be shown.

  8. Dependence of Hurricane intensity and structures on vertical resolution and time-step size

    NASA Astrophysics Data System (ADS)

    Zhang, Da-Lin; Wang, Xiaoxue

    2003-09-01

    In view of the growing interest in the explicit modeling of clouds and precipitation, the effects of varying vertical resolution and time-step size on the 72-h explicit simulation of Hurricane Andrew (1992) are studied using the Pennsylvania State University/National Center for Atmospheric Research (PSU/NCAR) mesoscale model (i.e., MM5) with a finest grid size of 6 km. It is shown that changing the vertical resolution and time-step size has significant effects on hurricane intensity and inner-core clouds/precipitation, but little impact on the hurricane track. In general, increasing vertical resolution tends to produce a deeper storm with lower central pressure, stronger three-dimensional winds, and more precipitation. Similar effects, but to a lesser extent, occur when the time-step size is reduced. It is found that increasing the low-level vertical resolution is more efficient at intensifying a hurricane, whereas changing the upper-level vertical resolution has little impact on hurricane intensity. Moreover, the use of a thicker surface layer tends to produce higher maximum surface winds. It is concluded that the use of higher vertical resolution, a thin surface layer, and smaller time-step sizes, along with higher horizontal resolution, is desirable to model more realistically the intensity, inner-core structures, and evolution of tropical storms as well as other convectively driven weather systems.

  9. Emotional Development and Adaptive Abilities in Adults with Intellectual Disability. A Correlation Study between the Scheme of Appraisal of Emotional Development (SAED) and Vineland Adaptive Behavior Scale (VABS)

    ERIC Educational Resources Information Center

    La Malfa, Giampaolo; Lassi, Stefano; Bertelli, Marco; Albertini, Giorgio; Dosen, Anton

    2009-01-01

    The importance of emotional aspects in developing cognitive and social abilities has already been underlined by many authors even if there is no unanimous agreement on the factors constituting adaptive abilities, nor is there any on the way to measure them or on the relation between adaptive ability and cognitive level. The purposes of this study…

  10. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    NASA Astrophysics Data System (ADS)

    Rybynok, V. O.; Kyriacou, P. A.

    2007-10-01

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean the prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose despite the many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that enables the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme has the capability of accurately extracting the concentration of glucose from a complex biological medium.

  11. Error correction in short time steps during the application of quantum gates

    NASA Astrophysics Data System (ADS)

    de Castro, L. A.; Napolitano, R. d. J.

    2016-04-01

    We propose a modification of the standard quantum error-correction method to enable the correction of errors that occur due to interaction with a noisy environment during quantum gates, without modifying the encoding used for memory qubits. Using a perturbative treatment of the noise that allows us to separate it from the ideal evolution of the quantum gate, we demonstrate that in certain cases it is necessary to divide the logical operation into short time steps interleaved with correction procedures. A prescription for how these gates can be constructed is provided, as well as a proof that, even in cases where dividing the quantum gate into short time steps is not necessary, this method may be advantageous for reducing the total duration of the computation.

  12. Persons with multiple disabilities exercise adaptive response schemes with the help of technology-based programs: three single-case studies.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Oliva, Doretta; Campodonico, Francesca; Lang, Russell

    2012-01-01

    The present three single-case studies assessed the effectiveness of technology-based programs to help three persons with multiple disabilities exercise adaptive response schemes independently. The response schemes included (a) left and right head movements for a man who kept his head increasingly static on his wheelchair's headrest (Study I), (b) left- and right-arm movements for a woman who tended to hold both arms/hands tight against her body (Study II), and (c) touching object cues on a computer screen for a girl who rarely used her residual vision for orienting/guiding her hand responses. The technology involved microswitches/sensors to detect the response schemes and a computer/control system to record their occurrences and activate preferred stimuli contingent on them. Results showed large increases in the response schemes targeted for each of the three participants during the intervention phases of the studies. The importance of using technology-based programs as tools for enabling persons with profound and multiple disabilities to practice relevant responses independently was discussed. PMID:22240142

  13. The Semi-implicit Time-stepping Algorithm in MH4D

    NASA Astrophysics Data System (ADS)

    Vadlamani, Srinath; Shumlak, Uri; Marklin, George; Meier, Eric; Lionello, Roberto

    2006-10-01

    The Plasma Science and Innovation Center (PSI Center) at the University of Washington is developing MHD codes to accurately model Emerging Concept (EC) devices. An examination of the semi-implicit time-stepping algorithm implemented in the tetrahedral-mesh MHD simulation code MH4D is presented. The time steps for standard explicit methods, which are constrained by the Courant-Friedrichs-Lewy (CFL) condition, are typically small in simulations of EC experiments due to the large Alfven speed. The CFL constraint is more severe on a tetrahedral mesh because of the irregular cell geometry. The semi-implicit algorithm [1] removes the fast-wave constraint, thus allowing larger time steps. We will present the implementation of this algorithm and numerical results for test problems in simple geometry. We will also present its effectiveness in simulations of complex geometry, similar to the ZaP [2] experiment at the University of Washington. References: [1] Douglas S. Harned and D. D. Schnack, Semi-implicit method for long time scale magnetohydrodynamic computations in three dimensions, JCP, Volume 65, Issue 1, July 1986, Pages 57-70. [2] U. Shumlak, B. A. Nelson, R. P. Golingo, S. L. Jackson, E. A. Crawford, and D. J. Den Hartog, Sheared flow stabilization experiments in the ZaP flow Z-pinch, Phys. Plasmas 10, 1683 (2003).
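    The explicit CFL restriction described in this record can be sketched as follows. This is a hypothetical illustration, not MH4D's implementation: the field strength, density, cell sizes and function names are all assumed values chosen to show how a few small cells in an irregular tetrahedral mesh dominate the allowed time step.

    ```python
    import numpy as np

    # Assumed illustrative values, not from MH4D: the CFL limit
    # dt <= C * h_min / v_A is set by the *smallest* cell, so an irregular
    # tetrahedral mesh with a few sliver cells forces very small steps.

    def alfven_speed(B, rho, mu0=4e-7 * np.pi):
        """Alfven speed v_A = B / sqrt(mu0 * rho)."""
        return B / np.sqrt(mu0 * rho)

    def cfl_dt(cell_sizes, v_A, courant=0.5):
        """Largest explicit time step allowed by the CFL condition."""
        return courant * np.min(cell_sizes) / v_A

    v_A = alfven_speed(B=0.1, rho=1e-7)   # a plasma-like Alfven speed, ~3e5 m/s
    uniform = np.full(1000, 1e-2)         # regular mesh: 1 cm cells everywhere
    irregular = uniform.copy()
    irregular[:10] = 1e-4                 # a few 0.1 mm sliver tetrahedra

    print(cfl_dt(uniform, v_A))           # step limited by the 1 cm cells
    print(cfl_dt(irregular, v_A))         # 100x smaller: the slivers dominate
    ```

    A semi-implicit treatment of the fast waves removes this limit, which is why it pays off most on meshes like the irregular one above.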

  14. Modelling of Thermal Advective Reactive Flow in Hydrothermal Mineral Systems Using an Implicit Time-stepped Finite Element Method.

    NASA Astrophysics Data System (ADS)

    Hornby, P. G.

    2005-12-01

    Understanding the chemical and thermal processes taking place in hydrothermal mineral deposition systems could well be a key to unlocking new mineral reserves through improved targeting of exploration efforts. To aid in this understanding it is very helpful to be able to model such processes with sufficient fidelity to test process hypotheses. To gain understanding, it is often sufficient to obtain semi-quantitative results that model the broad aspects of the complex set of thermal and chemical effects taking place in hydrothermal systems. For example, it is often sufficient to gain an understanding of where thermal, geometric and chemical factors converge to precipitate gold (say) without being perfectly precise about how much gold is precipitated. The traditional approach is to use incompressible Darcy flow together with the Boussinesq approximation. From the flow field, the heat equation is used to advect and conduct the heat. The flow field is also used to transport solutes by solving an advection-dispersion-diffusion equation. The reactions in the fluid and between fluid and rock act as source terms for these advection-dispersion equations. Many existing modelling systems used for simulating such systems employ explicit time marching schemes and finite differences. The disadvantage of this approach is the need to work on rectilinear grids and the number of time steps required by the Courant condition in the solute transport step. The second factor can be particularly significant if the chemical system is complex, requiring (at a minimum) an equilibrium calculation at each grid point at each time step. In the approach we describe, we use finite elements rather than finite differences, and the pressure, heat and advection-dispersion equations are solved implicitly. The general idea is to put unconditional numerical stability of the time integration first, and let accuracy assume a secondary role. It is in this sense that the method is semi-quantitative. However

  15. A shorter time step for eco-friendly reservoir operation does not always produce better water availability and ecosystem benefits

    NASA Astrophysics Data System (ADS)

    Yu, Chunxue; Yin, Xin'an; Yang, Zhifeng; Cai, Yanpeng; Sun, Tao

    2016-09-01

    The time step used in the operation of eco-friendly reservoirs has decreased from monthly to daily, and even sub-daily. The shorter time step is considered a better choice for satisfying downstream environmental requirements because it more closely resembles the natural flow regime. However, little consideration has been given to the influence of different time steps on the ability to simultaneously meet human and environmental flow requirements. To analyze this influence, we used an optimization model to explore the relationships among the time step, environmental flow (e-flow) requirements, and human water needs for a wide range of time steps and e-flow scenarios. We used the degree of hydrologic alteration to evaluate the regime's ability to satisfy the e-flow requirements of riverine ecosystems, and used water supply reliability to evaluate the ability to satisfy human needs. We then applied the model to a case study of China's Tanghe Reservoir. We found four efficient time steps (2, 3, 4, and 5 days), with a remarkably high water supply reliability (around 80%) and a low alteration of the flow regime (<35%). Our analysis of the hydrologic alteration revealed the smallest alteration at time steps ranging from 1 to 7 days. However, longer time steps led to higher water supply reliability to meet human needs under several e-flow scenarios. Our results show that adjusting the time step is a simple way to improve reservoir operation performance to balance human and e-flow needs.

  16. Sensitivity of The High-resolution Wam Model With Respect To Time Step

    NASA Astrophysics Data System (ADS)

    Kasemets, K.; Soomere, T.

    The northern part of the Baltic Proper and its subbasins (the Bothnian Sea, the Gulf of Finland, Moonsund) pose a challenge for wave modellers. Unlike the southern and eastern parts of the Baltic Sea, their coasts are highly irregular and contain many peculiarities with characteristic horizontal scales of the order of a few kilometres. For example, the northern coast of the Gulf of Finland is extremely ragged and contains a huge number of small islands. Its southern coast is more or less regular but has a cliff up to 50 m high that is frequently covered by tall forests. The area also contains numerous banks with water depths of only a couple of metres that may substantially modify nearby wave properties owing to topographical effects. This suggests that a high-resolution wave model should be applied to the region in question, with a horizontal resolution of the order of 1 km or even less. According to the Courant-Friedrichs-Lewy criterion, the integration time step for such models must be of the order of a few tens of seconds. A high-resolution WAM model turns out to be fairly sensitive to the particular choice of time step. In our experiments, a medium-resolution model of the whole Baltic Sea was used, with a horizontal resolution of 3 miles (3' along latitudes and 6' along longitudes) and an angular resolution of 12 directions. The model was run with a steady wind blowing at 20 m/s from different directions and with two time steps (1 and 3 minutes). For most wind directions, the rms difference of significant wave heights calculated with the different time steps did not exceed 10 cm and was typically of the order of a few per cent. The difference arose within a few tens of minutes and generally did not increase in further computations. However, in the case of a north wind, the difference increased nearly monotonically and reached 25-35 cm (10-15%) within three hours of integration, whereas the mean significant wave

  17. An extended doubly-adaptive quadrature method based on the combination of the Ninomiya and the FLR schemes

    NASA Astrophysics Data System (ADS)

    Hasegawa, Takemitsu; Hibino, Susumu; Hosoda, Yohsuke; Ninomiya, Ichizo

    2007-08-01

    An improvement is made to an automatic quadrature of adaptive type due to Ninomiya (J. Inf. Process. 3:162-170, 1980), based on the Newton-Cotes rule, by incorporating a doubly-adaptive algorithm due to Favati, Lotti and Romani (ACM Trans. Math. Softw. 17:207-217, 1991; ACM Trans. Math. Softw. 17:218-232, 1991). We compare the performance of the present method with that of several others using various test problems, including Kahaner's (Computation of numerical quadrature formulas. In: Rice, J.R. (ed.) Mathematical Software, 229-259. Academic, Orlando, FL, 1971).
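    The adaptive idea underlying such quadratures can be illustrated with a minimal recursive Simpson rule. This is a generic sketch, not the Ninomiya or Favati-Lotti-Romani algorithm: the routine subdivides only where a local error estimate exceeds the tolerance, so effort concentrates where the integrand is difficult.

    ```python
    import math

    def _simpson(f, a, b):
        """Basic three-point Simpson estimate on [a, b]."""
        c = 0.5 * (a + b)
        return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

    def adaptive_simpson(f, a, b, tol=1e-10):
        """Subdivide only where the local error estimate exceeds the tolerance."""
        c = 0.5 * (a + b)
        whole = _simpson(f, a, b)
        left, right = _simpson(f, a, c), _simpson(f, c, b)
        err = left + right - whole               # standard local error estimate
        if abs(err) < 15.0 * tol:
            return left + right + err / 15.0     # accept, with Richardson correction
        return (adaptive_simpson(f, a, c, tol / 2.0)
                + adaptive_simpson(f, c, b, tol / 2.0))

    print(adaptive_simpson(math.sin, 0.0, math.pi))  # integral of sin on [0, pi] is 2
    ```

    A doubly-adaptive method extends this by also choosing among rules of different orders on each subinterval, rather than fixing one rule throughout.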

  18. Imaginary Time Step Method to Solve the Dirac Equation with Nonlocal Potential

    SciTech Connect

    Zhang Ying; Liang Haozhao; Meng Jie

    2009-08-26

    The imaginary time step (ITS) method is applied to solve the Dirac equation with nonlocal potentials in coordinate space. Taking the nucleus {sup 12}C as an example, even with nonlocal potentials, direct ITS evolution of the Dirac equation still encounters the problem of collapse into the Dirac sea. However, following the recipe of our previous investigation, this collapse can be avoided by ITS evolution of the corresponding Schroedinger-like equation without localization, which gives convergent results identical to those obtained iteratively by the shooting method with localized effective potentials.
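    The core ITS idea can be sketched on a much simpler problem than the Dirac equation. The sketch below, with assumed units (hbar = m = omega = 1) and a harmonic potential rather than anything from this record, evolves psi <- psi - dtau*H*psi and renormalizes; excited states decay fastest in imaginary time, so psi relaxes to the ground state.

    ```python
    import numpy as np

    # Hedged sketch of the imaginary-time-step idea (not the paper's Dirac
    # solver): grid, potential and units are assumptions for illustration.
    x = np.linspace(-8.0, 8.0, 257)
    dx = x[1] - x[0]

    def H(psi):
        """Finite-difference Hamiltonian: -(1/2) psi'' + (1/2) x^2 psi."""
        lap = np.zeros_like(psi)
        lap[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
        return -0.5 * lap + 0.5 * x**2 * psi

    psi = np.exp(-x**2)                      # any state overlapping the ground state
    dtau = 1e-3                              # below the explicit stability limit
    for _ in range(5000):
        psi = psi - dtau * H(psi)            # one imaginary time step
        psi /= np.sqrt(np.sum(psi**2) * dx)  # renormalize

    energy = np.sum(psi * H(psi)) * dx       # <psi|H|psi>; exact value is 0.5
    print(round(energy, 3))
    ```

    For the Dirac equation this naive evolution fails, because the spectrum is unbounded from below (the Dirac sea); that is precisely the difficulty the record addresses by evolving the Schroedinger-like equation instead.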

  19. The Design, Implementation and Evaluation of a Pilot Scheme Adapted to the Bologna Goals at Tertiary Level

    ERIC Educational Resources Information Center

    Sanchez, Purificacion

    2009-01-01

    The Bologna Declaration attempts to reform the structure of the higher education system in forty-six European countries in a convergent way. By 2010, the European space for higher education should be completed. In the 2005-2006 academic year, the University of Murcia, Spain, started promoting initiatives to adapt individual modules and entire…

  20. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  1. A chimera grid scheme. [multiple overset body-conforming mesh system for finite difference adaptation to complex aircraft configurations

    NASA Technical Reports Server (NTRS)

    Steger, J. L.; Dougherty, F. C.; Benek, J. A.

    1983-01-01

    A mesh system composed of multiple overset body-conforming grids is described for adapting finite-difference procedures to complex aircraft configurations. In this so-called 'chimera mesh,' a major grid is generated about a main component of the configuration and overset minor grids are used to resolve all other features. Methods for connecting overset multiple grids and modifications of flow-simulation algorithms are discussed. Computational tests in two dimensions indicate that the use of multiple overset grids can simplify the task of grid generation without an adverse effect on flow-field algorithms and computer code complexity.

  2. Construction of Low Dissipative High Order Well-Balanced Filter Schemes for Non-Equilibrium Flows

    NASA Technical Reports Server (NTRS)

    Wang, Wei; Yee, H. C.; Sjogreen, Bjorn; Magin, Thierry; Shu, Chi-Wang

    2009-01-01

    The goal of this paper is to generalize the well-balanced approach for non-equilibrium flow studied by Wang et al. [26] to a class of low-dissipative, high-order shock-capturing filter schemes, and to explore further advantages of well-balanced schemes in reacting flows. The class of filter schemes developed by Yee et al. [30], Sjoegreen & Yee [24] and Yee & Sjoegreen [35] consists of two steps: a full time step of a spatially high-order non-dissipative base scheme, and an adaptive nonlinear filter containing shock-capturing dissipation. A valuable property of the filter scheme is that the base scheme and the filter are stand-alone modules by design. Therefore, the idea of designing a well-balanced filter scheme is straightforward, i.e., choosing a well-balanced base scheme with a well-balanced filter (both of high order). A typical class of these schemes shown in this paper is the high-order central difference/predictor-corrector (PC) schemes with a high-order well-balanced WENO filter. The new filter scheme with the well-balanced property combines the features of both filter methods and well-balanced schemes: it preserves certain steady-state solutions exactly; it is able to capture small perturbations, e.g., turbulence fluctuations; and it adaptively controls numerical dissipation. Thus it shows high accuracy, efficiency and stability in shock/turbulence interactions. Numerical examples containing 1D and 2D smooth problems, a 1D stationary contact discontinuity problem and 1D turbulence/shock interactions are included to verify the improved accuracy, in addition to the well-balanced behavior.

  3. Electric and hybrid electric vehicle study utilizing a time-stepping simulation

    NASA Technical Reports Server (NTRS)

    Schreiber, Jeffrey G.; Shaltens, Richard K.; Beremand, Donald G.

    1992-01-01

    The applicability of NASA's advanced power technologies to electric and hybrid vehicles was assessed using a time-stepping computer simulation to model electric and hybrid vehicles operating over the Federal Urban Driving Schedule (FUDS). Both the energy and power demands of the FUDS were taken into account and vehicle economy, range, and performance were addressed simultaneously. Results indicate that a hybrid electric vehicle (HEV) configured with a flywheel buffer energy storage device and a free-piston Stirling convertor fulfills the emissions, fuel economy, range, and performance requirements that would make it acceptable to the consumer. It is noted that an assessment to determine which of the candidate technologies are suited for the HEV application has yet to be made. A proper assessment should take into account the fuel economy and range, along with the driveability and total emissions produced.

  4. Finite time step and spatial grid effects in δf simulation of warm plasmas

    NASA Astrophysics Data System (ADS)

    Sturdevant, Benjamin J.; Parker, Scott E.

    2016-01-01

    This paper introduces a technique for analyzing time integration methods used with the particle weight equations in δf method particle-in-cell (PIC) schemes. The analysis applies to the simulation of warm, uniform, periodic or infinite plasmas in the linear regime and considers the collective behavior similar to the analysis performed by Langdon for full-f PIC schemes [1,2]. We perform both a time integration analysis and spatial grid analysis for a kinetic ion, adiabatic electron model of ion acoustic waves. An implicit time integration scheme is studied in detail for δf simulations using our weight equation analysis and for full-f simulations using the method of Langdon. It is found that the δf method exhibits a CFL-like stability condition for low temperature ions, which is independent of the parameter characterizing the implicitness of the scheme. The accuracy of the real frequency and damping rate due to the discrete time and spatial schemes is also derived using a perturbative method. The theoretical analysis of numerical error presented here may be useful for the verification of simulations and for providing intuition for the design of new implicit time integration schemes for the δf method, as well as understanding differences between δf and full-f approaches to plasma simulation.

  5. Effect of spatial configuration of an extended nonlinear Kierstead-Slobodkin reaction-transport model with adaptive numerical scheme.

    PubMed

    Owolabi, Kolade M; Patidar, Kailash C

    2016-01-01

    In this paper, we consider numerical simulations of an extended nonlinear form of the Kierstead-Slobodkin reaction-transport system in one and two dimensions. We employ the popular fourth-order exponential time differencing Runge-Kutta (ETDRK4) scheme proposed by Cox and Matthews (J Comput Phys 176:430-455, 2002), as modified by Kassam and Trefethen (SIAM J Sci Comput 26:1214-1233, 2005), for the time integration of spatially discretized partial differential equations. We demonstrate the advantage of ETDRK4 over standard exponential time differencing integrators, and provide timing and error comparisons. Numerical results obtained in this paper grant further insight into the question 'What is the minimal size of the spatial domain so that the population persists?' posed by Kierstead and Slobodkin (J Mar Res 12:141-147, 1953), with the conclusive remark that the population size increases with the size of the domain. In an attempt to examine the biological wave phenomena of the solutions, we present numerical results in both one- and two-dimensional space, which have interesting ecological implications. Initial data and parameter values were chosen to mimic some existing patterns. PMID:27064984
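    As a hedged illustration of the Cox-Matthews ETDRK4 formulas for u' = L*u + N(u), here is the scalar case applied to a logistic test equation (my own choice of test problem; the paper applies the Kassam-Trefethen contour-integral variant to discretized PDEs, where L is a matrix or a diagonal operator in Fourier space):

    ```python
    import math

    def etdrk4_step(u, h, L, N):
        """One ETDRK4 step (Cox-Matthews) for u' = L*u + N(u), scalar L != 0."""
        c = h * L
        E, E2 = math.exp(c), math.exp(c / 2.0)
        Q = (E2 - 1.0) / L                   # phi-function weight for the stages
        f1 = (-4.0 - c + E * (4.0 - 3.0 * c + c * c)) / (h * h * L**3)
        f2 = (2.0 + c + E * (-2.0 + c)) / (h * h * L**3)
        f3 = (-4.0 - 3.0 * c - c * c + E * (4.0 - c)) / (h * h * L**3)
        Nu = N(u)
        a = E2 * u + Q * Nu                  # stage values a, b, c
        Na = N(a)
        b = E2 * u + Q * Na
        Nb = N(b)
        cc = E2 * a + Q * (2.0 * Nb - Nu)
        Nc = N(cc)
        return E * u + f1 * Nu + 2.0 * f2 * (Na + Nb) + f3 * Nc

    # Logistic test problem u' = u - u^2: L = 1, N(u) = -u^2,
    # exact solution u(t) = 1 / (1 + 9*exp(-t)) for u(0) = 0.1.
    u, h = 0.1, 0.1
    for _ in range(10):
        u = etdrk4_step(u, h, 1.0, lambda v: -v * v)
    exact = 1.0 / (1.0 + 9.0 * math.exp(-1.0))
    print(abs(u - exact))                    # small fourth-order error
    ```

    The linear part is treated exactly through the exponentials, which is what lets ETD schemes take large stable steps on stiff spatially discretized problems; Kassam and Trefethen's modification evaluates the f-coefficients by contour integration to avoid cancellation when h*L is near zero.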

  6. Robust rate-adaptive hybrid ARQ scheme for frequency-hopped spread-spectrum multiple-access communication systems

    NASA Astrophysics Data System (ADS)

    Bigloo, Amir M. Y.; Gulliver, T. Aaron; Wang, Q.; Bhargava, Vijay K.

    1994-06-01

    This paper considers the application of rate-adaptive coding (RAC) to a spread spectrum multiple access (SSMA) communication system. Specifically, RAC using a variable rate Reed-Solomon (RS) code with a single decoder is applied to frequency-hopped SSMA. We show that this combination can accommodate a larger number of users compared to that with conventional fixed-rate coding. This increase is a result of a reduction in the channel interference from other users. The penalty for this improvement in most cases is a slight increase in the delay (composed of propagation and decoding delay). The throughput and the undetected error probability for a Q-ary symmetric channel are analyzed, and performance results are presented.

  7. Coupling a local adaptive grid refinement technique with an interface sharpening scheme for the simulation of two-phase flow and free-surface flows using VOF methodology

    NASA Astrophysics Data System (ADS)

    Malgarinos, Ilias; Nikolopoulos, Nikolaos; Gavaises, Manolis

    2015-11-01

    This study presents the implementation of an interface sharpening scheme on the basis of the Volume of Fluid (VOF) method, as well as its application to a number of theoretical and real cases usually modelled in the literature. More specifically, the solution of an additional sharpening equation along with the standard VOF model equations is proposed, offering the advantage of restraining interface numerical diffusion while keeping a quite smooth induced velocity field around the interface. This sharpening equation is solved right after volume fraction advection; however, a novel method for its coupling with the momentum equation has been applied in order to save computational time. The advantages of the proposed sharpening scheme are that (a) it is mass-conservative, so its application does not compromise one of the most important benefits of the VOF method, and (b) it can be used on coarser grids, since the suppression of numerical diffusion is then grid-independent. Coupling the solved equation with an adaptive local grid refinement technique further decreases computational time while keeping high levels of accuracy in the area of maximum interest (the interface). The numerical algorithm is first tested against two theoretical benchmark cases for interface-tracking methodologies, followed by validation for the cases of a free-falling water droplet accelerated by gravity and the normal impingement of a liquid droplet onto a flat substrate. Results indicate that coupling the interface sharpening equation with the HRIC discretization scheme used for the volume fraction flux term not only decreases interface numerical diffusion, but also leaves the induced velocity field less perturbed by spurious velocities across the liquid-gas interface. With the proposed algorithmic flow path, coarser grids can replace finer ones at only a slight expense of accuracy.

  8. Classification Schemes: Developments and Survival.

    ERIC Educational Resources Information Center

    Pocock, Helen

    1997-01-01

    Discusses the growth, survival and future of library classification schemes. Concludes that to survive, a scheme must constantly update its policies, and readily adapt itself to accommodate growing disciplines and changing terminology. (AEF)

  9. Implicit lower-upper/approximate-factorization schemes for incompressible flows

    SciTech Connect

    Briley, W.R.; Neerarambam, S.S.; Whitfield, D.L.

    1996-10-01

    A lower-upper/approximate-factorization (LU/AF) scheme is developed for the incompressible Euler or Navier-Stokes equations. The LU/AF scheme contains an iteration parameter that can be adjusted to improve iterative convergence rate. The LU/AF scheme is to be used in conjunction with linearized implicit approximations and artificial compressibility to compute steady solutions, and within sub-iterations to compute unsteady solutions. Formulations based on time linearization with and without sub-iteration and on Newton linearization are developed using spatial difference operators. The spatial approximation used includes upwind differencing based on Roe's approximate Riemann solver and van Leer's MUSCL scheme, with numerically computed implicit flux linearizations. Simple one-dimensional diffusion and advection/diffusion problems are first studied analytically to provide insight for development of the Navier-Stokes algorithm. The optimal values of both time step and LU/AF parameter are determined for a test problem consisting of two-dimensional flow past a NACA 0012 airfoil, with a highly stretched grid. The optimal parameter provides a consistent improvement in convergence rate for four test cases having different grids and Reynolds numbers and, also, for an inviscid case. The scheme can be easily extended to three dimensions and adapted for compressible flows. 24 refs., 11 figs., 2 tabs.
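    The approximate-factorization idea can be sketched on the one-dimensional diffusion model problem the record mentions. This is a hedged illustration, not the paper's incompressible Navier-Stokes LU/AF scheme: the implicit operator (I - dt*nu*D2) is replaced by the product of a lower and an upper factor, each invertible by a single cheap sweep, at the cost of an O(dt^2) splitting error. All values below are assumed for illustration.

    ```python
    import numpy as np

    # 1D diffusion u_t = nu * u_xx, implicit Euler: (I - dt*nu*D2) u_new = u_old.
    n, nu, dt, h = 50, 1.0, 1e-5, 1.0 / 51
    D2 = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
          + np.diag(np.full(n - 1, 1.0), 1)) / h**2
    Lo = np.tril(D2, -1) + 0.5 * np.diag(np.diag(D2))   # lower + half diagonal
    Up = np.triu(D2, 1) + 0.5 * np.diag(np.diag(D2))    # upper + half diagonal
    # Note Lo + Up == D2, so (I - dt*nu*Lo)(I - dt*nu*Up)
    #                      = I - dt*nu*D2 + (dt*nu)^2 * Lo @ Up.

    rhs = np.sin(np.pi * h * np.arange(1, n + 1))        # initial condition
    exact = np.linalg.solve(np.eye(n) - dt * nu * D2, rhs)   # tridiagonal solve
    tmp = np.linalg.solve(np.eye(n) - dt * nu * Lo, rhs)     # forward sweep
    af = np.linalg.solve(np.eye(n) - dt * nu * Up, tmp)      # backward sweep

    print(np.max(np.abs(af - exact)))   # small O(dt^2) factorization error
    ```

    The iteration parameter of the paper's scheme plays a role analogous to tuning this splitting so that sub-iterations converge quickly; the sketch only shows the factorization itself.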

  10. ENZO+MORAY: radiation hydrodynamics adaptive mesh refinement simulations with adaptive ray tracing

    NASA Astrophysics Data System (ADS)

    Wise, John H.; Abel, Tom

    2011-07-01

    We describe a photon-conserving radiative transfer algorithm, using a spatially-adaptive ray-tracing scheme, and its parallel implementation into the adaptive mesh refinement cosmological hydrodynamics code ENZO. By coupling the solver with the energy equation and non-equilibrium chemistry network, our radiation hydrodynamics framework can be utilized to study a broad range of astrophysical problems, such as stellar and black hole feedback. Inaccuracies can arise from large time-steps and poor sampling; therefore, we devised an adaptive time-stepping scheme and a fast approximation of the optically-thin radiation field with multiple sources. We test the method with several radiative transfer and radiation hydrodynamics tests that are given in Iliev et al. We further test our method with more dynamical situations, for example, the propagation of an ionization front through a Rayleigh-Taylor instability, time-varying luminosities and collimated radiation. The test suite also includes an expanding H II region in a magnetized medium, utilizing the newly implemented magnetohydrodynamics module in ENZO. This method linearly scales with the number of point sources and number of grid cells. Our implementation is scalable to 512 processors on distributed memory machines and can include the radiation pressure and secondary ionizations from X-ray radiation. It is included in the newest public release of ENZO.
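    The generic logic of an adaptive time-stepping controller of the kind mentioned above can be sketched with step doubling. This is a minimal illustration with a forward-Euler inner step and assumed parameter choices, not ENZO's actual scheme: one full step is compared against two half steps to estimate the local truncation error, and the step size is grown or shrunk to hold that error near a tolerance.

    ```python
    import math

    def adaptive_advance(f, t, y, t_end, dt, tol=1e-6):
        """Step-doubling adaptive time stepper for y' = f(t, y) (generic sketch)."""
        while t < t_end:
            dt = min(dt, t_end - t)
            full = y + dt * f(t, y)                        # one step of size dt
            half = y + 0.5 * dt * f(t, y)                  # two steps of dt/2
            half = half + 0.5 * dt * f(t + 0.5 * dt, half)
            err = abs(half - full)                         # local error estimate
            if err <= tol or dt < 1e-12:
                t, y = t + dt, 2.0 * half - full           # accept, extrapolated
            # grow/shrink dt toward the tolerance, with safety limits
            dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-30)) ** 0.5))
        return y

    # Demo on y' = y, y(0) = 1: the exact answer at t = 1 is e.
    result = adaptive_advance(lambda t, y: y, 0.0, 1.0, 1.0, 0.1)
    print(abs(result - math.e))   # small controlled error
    ```

    In a radiation hydrodynamics code the error estimate would instead come from quantities such as the change in the ionization state per step, but the accept/reject-and-rescale structure is the same.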

  11. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  12. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical

  13. Adaptive numerical algorithms in space weather modeling

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2012-02-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit time stepping.
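
    The truncation-error-controlled adaptive time stepping that several of the records in this listing describe can be illustrated with a first/second-order embedded pair. This is a generic sketch, not the BATS-R-US or HydroGeoSphere implementation; the function name, the Euler/Heun pair, and the safety factor 0.9 are all illustrative choices:

```python
import numpy as np

def advance_heun_adaptive(f, t, y, dt, tol=1e-6):
    """One adaptive step: compare forward Euler (1st order) with Heun
    (2nd order). Their difference estimates the local truncation error,
    which is kept below a user-defined tolerance by shrinking dt."""
    while True:
        k1 = f(t, y)
        y_euler = y + dt * k1                    # 1st-order predictor
        k2 = f(t + dt, y_euler)
        y_heun = y + 0.5 * dt * (k1 + k2)        # 2nd-order corrector
        err = np.max(np.abs(y_heun - y_euler))   # local error estimate
        if err <= tol or dt < 1e-12:
            break
        dt *= 0.5                                # reject step, retry smaller
    # standard controller for a 2nd-order pair proposes the next step
    dt_next = dt * min(2.0, 0.9 * (tol / max(err, 1e-30)) ** 0.5)
    return t + dt, y_heun, dt_next
```

    On a smooth problem the controller settles near the largest step satisfying the tolerance, so the user controls accuracy rather than the step size directly.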

  14. Structural damage evolution assessment using the regularised time step integration method

    NASA Astrophysics Data System (ADS)

    Chen, Hua-Peng; Maung, Than Soe

    2014-09-01

    This paper presents an approach to identify both the location and severity evolution of damage in engineering structures directly from measured dynamic response data. A relationship between the change in structural parameters such as stiffness caused by structural damage development and the measured dynamic response data such as accelerations is proposed, on the basis of the governing equations of motion for the original and damaged structural systems. Structural damage parameters associated with time are properly chosen to reflect both the location and severity development over time of damage in a structure. Basic equations are provided to solve the chosen time-dependent damage parameters, which are constructed by using the Newmark time step integration method without requiring a modal analysis procedure. The Tikhonov regularisation method incorporating the L-curve criterion for determining the regularisation parameter is then employed to reduce the influence of measurement errors in dynamic response data and then to produce stable solutions for structural damage parameters. Results for two numerical examples with various simulated damage scenarios show that the proposed method can accurately identify the locations of structural damage and correctly assess the evolution of damage severity from information on vibration measurements with uncertainties.

  15. Space-time adaptive numerical methods for geophysical applications.

    PubMed

    Castro, C E; Käser, M; Toro, E F

    2009-11-28

    In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen in a locally adaptive manner such that the solution is evolved explicitly in time by an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves, comparing the new approach with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed and a new mesh partition approach is proposed and tested to further reduce computational cost. PMID:19840984
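
    For an advective system the local stability criterion mentioned above reduces to a per-element CFL bound; a minimal sketch with hypothetical variable names (h is the local mesh size, u the flow speed, c the wave speed), not the paper's actual implementation:

```python
import numpy as np

def local_time_steps(h, u, c, cfl=0.9):
    """Per-element stable time step dt_i = cfl * h_i / (|u_i| + c_i).
    Local time stepping advances each element with its own dt instead of
    forcing every element to the global minimum."""
    return cfl * h / (np.abs(u) + c)

h = np.array([0.1, 0.01, 0.1])   # one locally refined element
u = np.array([1.0, 1.0, 1.0])
c = np.array([1.0, 1.0, 1.0])
dt = local_time_steps(h, u, c)
dt_global = dt.min()             # a global scheme would be forced to this
```

    The refined element dictates the global step; with local stepping the coarse elements keep their ten-times-larger step, which is the source of the cost reduction.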

  16. A quantitative method for evaluating numerical simulation accuracy of time-transient Lamb wave propagation with its applications to selecting appropriate element size and time step.

    PubMed

    Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui

    2016-01-01

    Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to the multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations on Lamb wave propagation have been conducted to study its physical principles. However, few quantitative studies on evaluating the accuracy of these numerical simulations were reported. In this paper, a method based on cross correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of errors, affecting the position and shape accuracies, are first identified. Consequently, two quantitative indices, i.e., the GVE (group velocity error) and MACCC (maximum absolute value of cross correlation coefficient) derived from cross correlation analysis between a simulated signal and a reference waveform, are proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy on the position and shape is quantitatively evaluated. In order to apply this proposed method to select appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed. Then, proper element sizes for different element types and proper time steps for different time integration schemes are selected. These results prove that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation. PMID:26315506
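
    The MACCC index is the peak of the normalized cross-correlation between the simulated signal and the reference waveform; a minimal sketch (the GVE would additionally need the propagation distance and excitation frequency to turn the best-match lag into a group-velocity error, which is omitted here):

```python
import numpy as np

def maccc(sim, ref):
    """Maximum absolute value of the normalized cross-correlation
    coefficient between a simulated signal and a reference waveform,
    plus the sample lag at which it occurs."""
    sim = (sim - sim.mean()) / (sim.std() * len(sim))
    ref = (ref - ref.mean()) / ref.std()
    cc = np.correlate(sim, ref, mode="full")
    lag = int(np.argmax(np.abs(cc))) - (len(ref) - 1)  # shift of best match
    return np.abs(cc).max(), lag
```

    A value near 1 indicates the simulated waveform shape matches the reference; the lag quantifies the arrival-time (position) error.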

  17. Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme

    NASA Technical Reports Server (NTRS)

    Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook

    1995-01-01

    Three different schemes are employed to solve the benchmark problem. The first one is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second scheme is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third scheme is an optimized compact finite difference scheme modified by us: 4th-order Runge-Kutta time stepping and a 4th-order pentadiagonal compact spatial discretization with maximum resolution characteristics. The problems of category 1 are solved by using the second (UNO3-ACM) and third (Optimized Compact) schemes. The problems of category 2 are solved by using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved by using the first (TVD3) scheme. It can be concluded from the present calculations that the Optimized Compact scheme and the UNO3-ACM show good resolutions for category 1 and category 2, respectively.

  18. Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows

    NASA Astrophysics Data System (ADS)

    Sjoegreen, Bjoern; Yee, Helen C.

    2002-11-01

    Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes for the compressible Euler and Navier-Stokes equations has been developed and verified by the authors and collaborators. These schemes are suitable for the problems in question. Basically, the scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme. To control the amount of numerical dissipation, multiresolution wavelets are used as sensors to adaptively limit the amount and to aid the selection and/or blending of the appropriate types of numerical dissipation to be used. Magnetohydrodynamics (MHD) waves play a key role in drag reduction in highly maneuverable high speed combat aircraft, in space weather forecasting, and in the understanding of the dynamics of the evolution of our solar system and the main sequence stars. Although there exist a few well-studied second and third-order high-resolution shock-capturing schemes for the MHD in the literature, these schemes are too diffusive and not practical for turbulence/combustion MHD flows. On the other hand, extension of higher than third-order high-resolution schemes to the MHD system of equations is not straightforward. Unlike the hydrodynamic equations, the inviscid MHD system is non-strictly hyperbolic with non-convex fluxes. The wave structures and shock types are different from their hydrodynamic counterparts. Many of the non-traditional hydrodynamic shocks are not fully understood. Consequently, reliable and highly accurate numerical schemes for multiscale MHD equations pose a great challenge.

  19. An adaptive tau-leaping method for stochastic simulations of reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Padgett, Jill M. A.; Ilie, Silvana

    2016-03-01

    Stochastic modelling is critical for studying many biochemical processes in a cell, in particular when some reacting species have low population numbers. For many such cellular processes the spatial distribution of the molecular species plays a key role. The evolution of spatially heterogeneous biochemical systems with some species in low amounts is accurately described by the mesoscopic model of the Reaction-Diffusion Master Equation. The Inhomogeneous Stochastic Simulation Algorithm provides an exact strategy to numerically solve this model, but it is computationally very expensive on realistic applications. We propose a novel adaptive time-stepping scheme for the tau-leaping method for approximating the solution of the Reaction-Diffusion Master Equation. This technique combines effective strategies for variable time-stepping with path preservation to reduce the computational cost, while maintaining the desired accuracy. The numerical tests on various examples arising in applications show the improved efficiency achieved by the new adaptive method.
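
    The basic tau-leaping idea behind the adaptive method can be sketched on a single decay reaction: the leap size tau is chosen so the expected relative change in the population stays below a tolerance, and the number of reaction firings in each leap is drawn from a Poisson distribution. This is a simplified leap-size selection for illustration, not the paper's reaction-diffusion algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_leap_decay(x0, k, t_end, eps=0.03):
    """Tau-leaping for the decay reaction X -> 0 with propensity a = k*x.
    tau bounds the expected relative change in x by eps (simplified
    version of the usual leap-size selection)."""
    t, x = 0.0, x0
    while t < t_end and x > 0:
        a = k * x
        tau = max(eps * x / a, 1.0 / a)   # leap size from the eps criterion
        tau = min(tau, t_end - t)
        x -= rng.poisson(a * tau)         # firings in [t, t + tau)
        x = max(x, 0)                     # guard against overshoot
        t += tau
    return x
```

    Each leap advances the system by many reaction events at once, which is where the speedup over an exact event-by-event simulation comes from.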

  20. Multi-time-step ahead daily and hourly intermittent reservoir inflow prediction by artificial intelligent techniques using lumped and distributed data

    NASA Astrophysics Data System (ADS)

    Jothiprakash, V.; Magar, R. B.

    2012-07-01

    In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS) and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step-ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, the intermittent Koyna river watershed in Maharashtra, India is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various types of time-series, cause-effect and combined models are developed with lumped and distributed input data. Further, the model performance was evaluated using various performance criteria. From the results, it is found that the LGP models are superior to the ANN and ANFIS models, especially in predicting the peak inflows for both daily and hourly time steps. A detailed comparison of the overall performance indicated that the combined input model (combination of rainfall and inflow) performed better with both lumped and distributed input data. The lumped input data models performed slightly better, which is attributed, apart from the reduced noise in the data, to the better techniques and training approach, the appropriate selection of network architecture, the required inputs, and the training-testing ratios of the data set. The slightly poorer performance of the distributed data is due to its large variations and smaller number of observed values.

  1. Multifluid Block-Adaptive-Tree Solar Wind Roe-Type Upwind Scheme: Magnetospheric Composition and Dynamics During Geomagnetic Storms-Initial Results

    NASA Technical Reports Server (NTRS)

    Glocer, A.; Toth, G.; Ma, Y.; Gombosi, T.; Zhang, J.-C.; Kistler, L. M.

    2009-01-01

    The magnetosphere contains a significant amount of ionospheric O+, particularly during geomagnetically active times. The presence of ionospheric plasma in the magnetosphere has a notable impact on magnetospheric composition and processes. We present a new multifluid MHD version of the Block-Adaptive-Tree Solar wind Roe-type Upwind Scheme model of the magnetosphere to track the fate and consequences of ionospheric outflow. The multifluid MHD equations are presented as are the novel techniques for overcoming the formidable challenges associated with solving them. Our new model is then applied to the May 4, 1998 and March 31, 2001 geomagnetic storms. The results are juxtaposed with traditional single-fluid MHD and multispecies MHD simulations from a previous study, thereby allowing us to assess the benefits of using a more complex model with additional physics. We find that our multifluid MHD model (with outflow) gives comparable results to the multispecies MHD model (with outflow), including a more strongly negative Dst, reduced CPCP, and a drastically improved magnetic field at geosynchronous orbit, as compared to single-fluid MHD with no outflow. Significant differences in composition and magnetic field are found between the multispecies and multifluid approach further away from the Earth. We further demonstrate the ability to explore pressure and bulk velocity differences between H+ and O+, which is not possible when utilizing the other techniques considered.

  2. Characterization of Energy Conservation in Primary Knock-On Atom Cascades: Ballistic Phase Effects on Variable Time Steps

    SciTech Connect

    Corrales, Louis R.; Devanathan, Ram

    2006-09-01

    Non-equilibrium molecular dynamics simulation trajectories must in principle conserve energy along the entire path. Processes exist in high-energy primary knock-on atom cascades that can affect the energy conservation, specifically during the ballistic phase where collisions bring atoms into very close proximities. The solution, in general, is to reduce the time step size of the simulation. This work explores the effects of variable time step algorithms and the effects of specifying a maximum displacement. The period of the ballistic phase can be well characterized by methods developed in this work to monitor the kinetic energy dissipation during a high-energy cascade.
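
    The displacement-capped variable time step described above can be sketched in a few lines: the step is shrunk so the fastest atom never moves more than a prescribed distance per step during the ballistic phase. Function name, d_max and dt_max values are illustrative, not those of the study:

```python
import numpy as np

def md_time_step(velocities, dt_max=1.0e-15, d_max=0.005e-9):
    """Variable MD time step: cap dt so that the fastest atom moves at
    most d_max (m) in one step; otherwise use the default dt_max (s).
    velocities has shape (n_atoms, 3), in m/s."""
    v_fast = np.linalg.norm(velocities, axis=1).max()
    return min(dt_max, d_max / v_fast) if v_fast > 0 else dt_max
```

    During the ballistic phase the high knock-on velocities force a much smaller step; once the cascade thermalizes, the step relaxes back to dt_max.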

  3. Extended generalized Lagrangian multipliers for magnetohydrodynamics using adaptive multiresolution methods

    NASA Astrophysics Data System (ADS)

    Domingues, Margarete O.; Gomes, Anna Karina F.; Mendes, Odim; Schneider, Kai

    2013-10-01

    We present a new adaptive multiresolution method for the numerical simulation of ideal magnetohydrodynamics. The governing equations, i.e., the compressible Euler equations coupled with the Maxwell equations, are discretized using a finite volume scheme on a two-dimensional Cartesian mesh. Adaptivity in space is obtained via multiresolution analysis, which allows the reliable introduction of a locally refined mesh while controlling the error. The explicit time discretization uses a compact Runge-Kutta method for local time stepping and an embedded Runge-Kutta scheme for automatic time step control. An extended generalized Lagrangian multiplier approach with a mixed hyperbolic-parabolic correction is used to control the incompressibility of the magnetic field. Applications to a two-dimensional problem illustrate the properties of the method. Memory savings and numerical divergences of the magnetic field are reported and the accuracy of the adaptive computations is assessed by comparing with the available exact solution. This work was supported by the contract SiCoMHD (ANR-Blanc 2011-045).
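
    The multiresolution adaptivity idea can be illustrated with the simplest (Haar) case: the detail coefficient on each coarse cell is the difference between the fine-grid values and their prediction from the coarse average, and large details flag cells for refinement. A one-dimensional sketch with an illustrative threshold, not the paper's 2-D implementation:

```python
import numpy as np

def refinement_flags(u, threshold=1e-3):
    """Haar multiresolution sketch: project fine values onto a coarser
    level, compute the prediction error (detail coefficient) per coarse
    cell, and flag cells whose detail exceeds the threshold."""
    coarse = 0.5 * (u[0::2] + u[1::2])   # projection to the coarser level
    detail = u[0::2] - coarse            # Haar detail (prediction error)
    return np.abs(detail) > threshold
```

    Smooth regions produce near-zero details and are coarsened; discontinuities produce large details and keep the fine mesh, which is how the error is controlled while saving memory.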

  4. BIOMAP A Daily Time Step, Mechanistic Model for the Study of Ecosystem Dynamics

    NASA Astrophysics Data System (ADS)

    Wells, J. R.; Neilson, R. P.; Drapek, R. J.; Pitts, B. S.

    2010-12-01

    of both climate and ecosystems must be done at coarse grid resolutions; smaller domains require higher resolution for the simulation of natural resource processes at the landscape scale and that of on-the-ground management practices. Via a combined multi-agency and private conservation effort we have implemented a Nested Scale Experiment (NeScE) that ranges from 1/2 degree resolution (global, ca. 50 km) to ca. 8km (North America) and 800 m (conterminous U.S.). Our first DGVM, MC1, has been implemented at all 3 scales. We are just beginning to implement BIOMAP into NeScE, with its unique features, and daily time step, as a counterpoint to MC1. We believe it will be more accurate at all resolutions providing better simulations of vegetation distribution, carbon balance, runoff, fire regimes and drought impacts.

  5. Using a scale selective tendency filter and forward-backward time stepping to calculate consistent semi-Lagrangian trajectories

    NASA Astrophysics Data System (ADS)

    Alerskans, Emy; Kaas, Eigil

    2016-04-01

    In semi-Lagrangian models used for climate and NWP the trajectories are normally determined kinematically. Here we propose a new method for calculating trajectories in a more dynamically consistent way by pre-integrating the governing equations in a pseudo-Lagrangian manner using a short time step. Only non-advective adiabatic terms are included in this calculation, i.e., the Coriolis and pressure gradient force plus gravity in the momentum equations, and the divergence term in the continuity equation. This integration is performed with a forward-backward time step. Optionally, the tendencies are filtered with a local space filter, which reduces the phase speed of short-wave gravity and sound waves. The filter relaxes the time step limitation related to high frequency oscillations without compromising locality of the solution. The filter can be considered as an alternative to less local or global semi-implicit solvers. Once trajectories are estimated over a complete long advective time step, the full set of governing equations is stepped forward using these trajectories in combination with a flux-form semi-Lagrangian formulation of the equations. The methodology is designed to improve consistency and scalability on massively parallel systems, although here it has only been verified that the technique produces realistic results in a shallow water model and a 2D model based on the full Euler equations.
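
    The forward-backward step can be illustrated on the simplest gravity-wave analogue, a two-variable oscillator: the velocity is stepped forward with the old height, then the height is stepped with the freshly updated velocity. The function name and constants (g_const, H) are illustrative, not the model's variables:

```python
def forward_backward_oscillation(u0, h0, g_const, H, dt, nsteps):
    """Forward-backward stepping for du/dt = -g*h, dh/dt = H*u
    (a reduced gravity-wave system with frequency sqrt(g*H)):
    u is updated with the old h, then h with the new u, which makes
    the scheme neutrally stable for sqrt(g*H)*dt < 2."""
    u, h = u0, h0
    for _ in range(nsteps):
        u = u + dt * (-g_const * h)   # forward step using the old h
        h = h + dt * (H * u)          # "backward" step using the new u
    return u, h
```

    Unlike two forward-Euler steps, which amplify the oscillation, the forward-backward combination keeps the amplitude bounded, which is why it is a cheap alternative for the fast non-advective terms.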

  6. Composite centered schemes for multidimensional conservation laws

    SciTech Connect

    Liska, R.; Wendroff, B.

    1998-05-08

    The oscillations of a centered second order finite difference scheme and the excessive diffusion of a first order centered scheme can be overcome by global composition of the two, that is by performing cycles consisting of several time steps of the second order method followed by one step of the diffusive method. The authors show the effectiveness of this approach on some test problems in two and three dimensions.
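
    The global composition idea is easy to sketch for 1-D linear advection: a few Lax-Wendroff (oscillatory, second-order) steps followed by one Lax-Friedrichs (diffusive, first-order) step. The cycle length k and periodic boundaries are illustrative choices, not those of the paper's multidimensional tests:

```python
import numpy as np

def lax_wendroff(u, c):
    # second-order centered step (periodic); c = a*dt/dx is the CFL number
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5 * c * (up - um) + 0.5 * c * c * (up - 2 * u + um)

def lax_friedrichs(u, c):
    # first-order diffusive centered step (periodic)
    up, um = np.roll(u, -1), np.roll(u, 1)
    return 0.5 * (up + um) - 0.5 * c * (up - um)

def composite_cycle(u, c, k=3):
    """One composite cycle: k Lax-Wendroff steps followed by a single
    Lax-Friedrichs step, whose diffusion damps the oscillations
    generated by the second-order scheme."""
    for _ in range(k):
        u = lax_wendroff(u, c)
    return lax_friedrichs(u, c)
```

    Both component schemes are conservative, so the composite conserves the total as well while trading a little extra diffusion for much smaller oscillations.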

  7. A PWM Buck Converter With Load-Adaptive Power Transistor Scaling Scheme Using Analog-Digital Hybrid Control for High Energy Efficiency in Implantable Biomedical Systems.

    PubMed

    Park, Sung-Yun; Cho, Jihyun; Lee, Kyuseok; Yoon, Euisik

    2015-12-01

    We report a pulse width modulation (PWM) buck converter that is able to achieve a power conversion efficiency (PCE) of > 80% in light loads (≥ 100 μA) for implantable biomedical systems. In order to achieve a high PCE for the given light loads, the buck converter adaptively reconfigures the size of power PMOS and NMOS transistors and their gate drivers in accordance with load currents, while operating at a fixed frequency of 1 MHz. The buck converter employs an analog-digital hybrid control scheme for coarse/fine adjustment of power transistors. The coarse digital control generates an approximate duty cycle necessary for driving a given load and selects an appropriate width of power transistors to minimize redundant power dissipation. The fine analog control provides the final tuning of the duty cycle to compensate for the error from the coarse digital control. The mode switching between the analog and digital controls is accomplished by a mode arbiter which estimates the average of duty cycles for the given load condition from limit cycle oscillations (LCO) induced by coarse adjustment. The fabricated buck converter achieved a peak efficiency of 86.3% at 1.4 mA and > 80% efficiency for a wide range of load conditions from 45 μA to 4.1 mA, while generating 1 V output from a 2.5-3.3 V supply. The converter occupies 0.375 mm² in a 0.18 μm CMOS process and requires two external components: a 1.2 μF capacitor and a 6.8 μH inductor. PMID:26742139

  8. Design of optimally smoothing multi-stage schemes for the Euler equations

    NASA Technical Reports Server (NTRS)

    Van Leer, Bram; Tai, Chang-Hsien; Powell, Kenneth G.

    1989-01-01

    In this paper, a method is developed for designing multi-stage schemes that give optimal damping of high-frequencies for a given spatial-differencing operator. The objective of the method is to design schemes that combine well with multi-grid acceleration. The schemes are tested on a nonlinear scalar equation, and compared to Runge-Kutta schemes with the maximum stable time-step. The optimally smoothing schemes perform better than the Runge-Kutta schemes, even on a single grid. The analysis is extended to the Euler equations in one space-dimension by use of 'characteristic time-stepping', which preconditions the equations, removing stiffness due to variations among characteristic speeds. Convergence rates independent of the number of cells in the finest grid are achieved for transonic flow with and without a shock. Characteristic time-stepping is shown to be preferable to local time-stepping, although use of the optimally damping schemes appears to enhance the performance of local time-stepping. The extension of the analysis to the two-dimensional Euler equations is hampered by the lack of a model for characteristic time-stepping in two dimensions. Some results for local time-stepping are presented.
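
    Designing such multi-stage schemes comes down to shaping the amplification factor g(z) of the stage recurrence u_k = u_0 + alpha_k*z*u_{k-1} over the Fourier footprint z of the spatial operator. A minimal sketch of evaluating g; the coefficients below reproduce the classical 4-stage polynomial and are illustrative, not the optimally smoothing coefficients derived in the paper:

```python
def amplification(z, alphas):
    """Amplification factor of an m-stage scheme with stage coefficients
    alphas = (alpha_1, ..., alpha_m = 1), evaluated via the stage
    recurrence g_k = 1 + alpha_k * z * g_{k-1}, g_0 = 1."""
    g = 1.0
    for a in alphas:
        g = 1.0 + a * z * g
    return g
```

    With alphas = (1/4, 1/3, 1/2, 1) this yields 1 + z + z²/2 + z³/6 + z⁴/24; optimal smoothing coefficients would instead minimize |g(z)| over the high-frequency range while keeping stability.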

  9. Second-order Godunov-type scheme for reactive flow calculations on moving meshes

    NASA Astrophysics Data System (ADS)

    Azarenok, Boris N.; Tang, Tao

    2005-06-01

    The method of calculating the system of gas dynamics equations coupled with the chemical reaction equation is considered. The flow parameters are updated as a whole, without splitting the system into a hydrodynamical part and an ODE part. The numerical algorithm is based on Godunov's scheme on deforming meshes, with some modification to increase the scheme order in time and space. The variational approach is applied to generate the moving adaptive mesh. At every time step the functional of smoothness, written on the graph of the control function, is minimized. The grid lines are condensed in the vicinity of the main solution singularities, e.g., precursor shock, fire zones, intensive transverse shocks, and slip lines, which allows resolving the fine structure of the reaction domain. The numerical examples relate to Chapman-Jouguet detonation and unstable overdriven detonation in both one and two space dimensions.

  10. Simplified Two-Time Step Method for Calculating Combustion and Emission Rates of Jet-A and Methane Fuel With and Without Water Injection

    NASA Technical Reports Server (NTRS)

    Molnar, Melissa; Marek, C. John

    2005-01-01

    A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes. The two-time-step method uses either an initial time-averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration threshold of 1×10^-20 moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two-time-step method is used, as opposed to a previously developed one-step time-averaged method, to determine the chemical kinetic time with increased accuracy. The first time-averaged step is used at the initial times for smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, initial water to fuel mass ratio, temperature, and pressure. The second instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentration, pressure and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting rates of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx are obtained for Jet-A fuel and methane with and without water injection to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water to fuel mass ratio, pressure and temperature (T3). The temperature of the gas entering
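
    The switching logic of the two-time-step method amounts to a threshold test on the water concentration; a minimal sketch in which tau_avg and tau_inst are hypothetical callables standing in for the two fitted correlations described in the abstract:

```python
def chemical_time(c_water, tau_avg, tau_inst, switch=1e-20):
    """Two-time-step selection: below the water-concentration threshold
    (moles/cc) use the time-averaged correlation (step one); above it,
    the instantaneous correlation (step two)."""
    return tau_avg() if c_water < switch else tau_inst()
```

    In a combustor code the selected kinetic time would then be compared with the turbulent mixing time to decide which process limits the reaction rate.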

  11. Time-stepping methods for the simulation of the self-assembly of nano-crystals in MATLAB on a GPU

    NASA Astrophysics Data System (ADS)

    Korzec, M. D.; Ahnert, T.

    2013-10-01

    Partial differential equations describing the patterning of thin crystalline films are typically of fourth or sixth order, they are quasi- or semilinear and they are mostly defined on simple geometries such as rectangular domains. For the numerical simulation of these kinds of problems spectral methods are an efficient approach. We apply several implicit-explicit schemes to one recently derived PDE that we express in terms of coefficients of trigonometric interpolants. While the simplest IMEX scheme turns out to have the mildest step-size restriction, higher order SBDF schemes tend to be more unstable and exponential time integrators are fastest for the calculation of very accurate solutions. We implemented a reduced model in the EXPINT package syntax [3] and compared various exponential schemes. A convexity splitting approach was employed to stabilize the SBDF1 scheme. We show that accuracy control is crucial when using this idea, therefore we present a time-adaptive SBDF1/SBDF1-2-step method that yields convincing results reflecting the change in timescales during topological changes of the nanostructures. The implementation of all presented methods is carried out in MATLAB. We used the open source GPUmat package to gain up to 5-fold runtime benefits by carrying out calculations on a low-cost GPU, without requiring any knowledge of low-level programming or CUDA implementations, and found speedups comparable to MATLAB's PCT or to GPUmat run on Octave.
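
    The simplest IMEX (SBDF1) step for such stiff PDEs treats the high-order linear term implicitly and the nonlinearity explicitly, which is cheap in Fourier space because the implicit solve is a pointwise division. A generic one-dimensional sketch for u_t = -u_xxxx + N(u), written in Python/numpy rather than the paper's MATLAB, with N a user-supplied callable:

```python
import numpy as np

def sbdf1_step(u_hat, k, dt, nonlinear):
    """One SBDF1 (first-order IMEX) step in Fourier space for
    u_t = -u_xxxx + N(u): the stiff fourth-order term is implicit,
    the nonlinearity explicit. u_hat are Fourier coefficients and
    k the corresponding wavenumbers."""
    n_hat = np.fft.fft(nonlinear(np.fft.ifft(u_hat).real))
    return (u_hat + dt * n_hat) / (1.0 + dt * k**4)
```

    The denominator is where the stiffness is absorbed: high wavenumbers are damped unconditionally, so the step-size restriction comes only from the explicit nonlinear term.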

  12. Simplified Two-Time Step Method for Calculating Combustion Rates and Nitrogen Oxide Emissions for Hydrogen/Air and Hydorgen/Oxygen

    NASA Technical Reports Server (NTRS)

    Molnar, Melissa; Marek, C. John

    2005-01-01

    A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first order reaction expressions are then used to find the conversion in the reactor. The method uses a two-time-step kinetic scheme. The first time-averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, temperature, and pressure. The second instantaneous step is used at higher water concentrations (> 1×10^-20 moles/cc) in the mixture, which gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air fuel and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T4) as a function of overall fuel/air ratio, pressure and initial temperature (T3). High values of the regression coefficient R2 are obtained.

  13. Summary of Simplified Two Time Step Method for Calculating Combustion Rates and Nitrogen Oxide Emissions for Hydrogen/Air and Hydrogen/Oxygen

    NASA Technical Reports Server (NTRS)

    Marek, C. John; Molnar, Melissa

    2005-01-01

    A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first order reaction expressions are then used to find the conversion in the reactor. The method uses a two-time-step kinetic scheme. The first time-averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, temperature, and pressure. The second instantaneous step is used at higher water concentrations (greater than 1×10^-20 moles/cc) in the mixture, which gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air fuel and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T4) as a function of overall fuel/air ratio, pressure and initial temperature (T3). High values of the regression coefficient R2 are obtained.

  14. Mean square displacements with error estimates from non-equidistant time-step kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Leetmaa, Mikael; Skorodumova, Natalia V.

    2015-06-01

    We present a method to calculate mean square displacements (MSD) with error estimates from kinetic Monte Carlo (KMC) simulations of diffusion processes with non-equidistant time-steps. An analytical solution for estimating the errors is presented for the special case of one moving particle at fixed rate constant. The method is generalized to an efficient computational algorithm that can handle any number of moving particles or different rates in the simulated system. We show with examples that the proposed method gives the correct statistical error when the MSD curve describes pure Brownian motion and can otherwise be used as an upper bound for the true error.
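A minimal illustration of computing an MSD curve with error bars from a trajectory whose steps are non-equidistant in time, using simple time-binning of squared displacements (an illustrative stand-in, not the authors' analytical estimator):

```python
import numpy as np

def msd_with_errors(times, positions, n_bins=20):
    """Bin squared displacements from the origin by elapsed time and
    return bin centers, mean MSD and standard error per bin."""
    times = np.asarray(times, float)
    disp2 = (np.asarray(positions, float) - positions[0]) ** 2
    edges = np.linspace(0.0, times[-1], n_bins + 1)
    idx = np.clip(np.digitize(times, edges) - 1, 0, n_bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    msd = np.full(n_bins, np.nan)
    err = np.full(n_bins, np.nan)
    for b in range(n_bins):
        vals = disp2[idx == b]
        if vals.size:
            msd[b] = vals.mean()
            err[b] = vals.std(ddof=1) / np.sqrt(vals.size) if vals.size > 1 else 0.0
    return centers, msd, err

# Usage: a 1-D random walk with exponential waiting times, i.e. the
# single-particle, fixed-rate special case discussed in the abstract.
rng = np.random.default_rng(0)
t = np.concatenate(([0.0], np.cumsum(rng.exponential(1.0, 999))))
x = np.concatenate(([0.0], np.cumsum(rng.choice([-1.0, 1.0], 999))))
centers, msd, err = msd_with_errors(t, x, n_bins=10)
```

For pure Brownian motion the binned MSD grows roughly linearly in time; the per-bin standard error plays the role of the statistical error bound discussed above.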

  15. Nonlinear wave propagation using three different finite difference schemes (category 2 application)

    NASA Technical Reports Server (NTRS)

    Pope, D. Stuart; Hardin, J. C.

    1995-01-01

    Three common finite difference schemes are used to examine the computation of one-dimensional nonlinear wave propagation. The schemes are studied for their responses to numerical parameters such as time step selection, boundary condition implementation, and discretization of governing equations. The performance of the schemes is compared and various numerical phenomena peculiar to each is discussed.

  16. An adaptive mesh method for phase-field simulation of alloy solidification in three dimensions

    NASA Astrophysics Data System (ADS)

    Bollada, P. C.; Jimack, P. K.; Mullis, A. M.

    2015-06-01

We present our computational method for binary alloy solidification, which takes advantage of high-performance computing with up to 1024 cores. Much of the simulation is possible at sufficiently fine resolution on a modern 12-core PC; the 1024-core runs are only necessary for very mature dendrites and for convergence testing, where high resolution puts extreme demands on memory. In outline, the method uses implicit time stepping in conjunction with an iterative solver, adaptive meshing, and a scheme for dividing the workload across processors. We include three-dimensional results for a Lewis number of 100 and a snapshot of a mature dendrite for a Lewis number of 40.

  17. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    SciTech Connect

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
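As a concrete instance of the class of methods studied, here is one explicit third-order Runge-Kutta scheme (Heun's RK3, a generic textbook method rather than one of the report's five examples) together with a numerical check of its order:

```python
import math

def rk3_step(f, t, y, h):
    """One step of Heun's third-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 3.0, y + h * k1 / 3.0)
    k3 = f(t + 2.0 * h / 3.0, y + 2.0 * h * k2 / 3.0)
    return y + h * (k1 + 3.0 * k3) / 4.0

def integrate(f, y0, t0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 with n equal steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk3_step(f, t, y, h)
        t += h
    return y

# Order check on y' = -y, y(0) = 1: halving the step should cut the
# global error by roughly 2**3 = 8 for a third-order method.
f = lambda t, y: -y
e1 = abs(integrate(f, 1.0, 0.0, 1.0, 40) - math.exp(-1.0))
e2 = abs(integrate(f, 1.0, 0.0, 1.0, 80) - math.exp(-1.0))
ratio = e1 / e2  # close to 8
```

The linear stability and stiff-stability properties that the report analyzes depend on the particular coefficient choice; the tableau above is only one member of the family.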

  18. The role of the time step and overshooting in the modelling of PMS evolution: The case of EK Cephei

    NASA Astrophysics Data System (ADS)

    Marques, J. P.; Fernandes, J.; Monteiro, M. J. P. F. G.

    2004-07-01

EK Cephei (HD 206821) is a unique candidate to test predictions based on stellar evolutionary models. It is a double-lined detached eclipsing binary system with accurate absolute dimensions available and a precise determination of the metallicity. Most importantly for our work, its low-mass (1.12 Msun) component appears to be in the pre-main sequence (PMS) phase. We have produced detailed evolutionary models of the binary EK Cep using the CESAM stellar evolution code (Morel 1997). A χ2-minimisation was performed to derive the most reliable set of modelling parameters (age, αA, αB and Yi). We have found that an evolutionary age of about 26.8 Myr fits both components in the same isochrone. The positions of EK Cep A and B in the HR diagram are consistent (within the observational uncertainties) with our results. Our revised calibration shows clearly that EK Cep A is at the beginning of the main sequence, while EK Cep B is indeed a PMS star. Such a combination allows for a precise age determination of the binary, and provides a strict test of the modelling. In particular we have found that the definition of the time step in calculating the PMS evolution is crucial to reproduce the observations. A discussion of the optimal time step for calculating PMS evolution is presented. The fitting of the radii of both components is a more difficult task; although we managed to do it for EK Cep B, EK Cep A has a smaller radius than our best models predict. We further studied the effect of including moderate convective overshooting; the calibration of the binary is not significantly altered, but the effect of overshooting can be dramatic in the approach to the main sequence for stars with masses high enough to burn hydrogen through the CNO cycle on the main sequence.

  19. Motion estimation optimization in a MPEG-1-like video coding scheme for low-bit-rate applications

    NASA Astrophysics Data System (ADS)

    Roser, Miguel; Villegas, Paulo

    1994-05-01

In this paper we present a work based on a coding algorithm for visual information that follows the International Standard ISO-IEC IS 11172, `Coding of Moving Pictures and Associated Audio for Digital Storage Media up to about 1.5 Mbit/s', widely known as MPEG1. The main intention in the definition of the MPEG 1 standard was to provide a large degree of flexibility so that it could be used in many different applications. The interest of this paper is to adapt the MPEG 1 scheme for low-bitrate operation and optimize it for special situations such as a talking head with little movement, which is a usual situation in videotelephony applications. An adapted and compatible MPEG 1 scheme, previously developed, able to operate at p×8 kbit/s, is used in this work. Looking for a low-complexity scheme, and taking into account that the most expensive step in the scheme (from the point of view of consumed computer time) is the motion estimation process (almost 80% of the total computer time is spent on the ME), an improvement of the motion estimation module based on the use of a new search pattern is presented in this paper.
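A reduced search pattern is what makes motion estimation cheap relative to exhaustive search. The classic three-step search sketched below illustrates the idea; it is a generic pattern, not the specific one proposed in the paper:

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of absolute differences between a block and a reference patch."""
    h, w = block.shape
    return np.abs(block - ref[y:y + h, x:x + w]).sum()

def three_step_search(block, ref, y0, x0, step=4):
    """Return the (dy, dx) displacement minimizing SAD, probing a
    9-point pattern whose radius halves each round."""
    h, w = block.shape
    by, bx = y0, x0
    while step >= 1:
        best = sad(block, ref, by, bx)
        cand = (by, bx)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = by + dy, bx + dx
                if 0 <= y <= ref.shape[0] - h and 0 <= x <= ref.shape[1] - w:
                    cost = sad(block, ref, y, x)
                    if cost < best:
                        best, cand = cost, (y, x)
        by, bx = cand
        step //= 2
    return by - y0, bx - x0
```

Three rounds cost on the order of 27 SAD evaluations, against 81 for an exhaustive ±4 full search, which is why the choice of search pattern dominates encoder complexity.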

  20. An efficient computational scheme for electronic excitation spectra of molecules in solution using the symmetry-adapted cluster-configuration interaction method: The accuracy of excitation energies and intuitive charge-transfer indices

    NASA Astrophysics Data System (ADS)

    Fukuda, Ryoichi; Ehara, Masahiro

    2014-10-01

    Solvent effects on electronic excitation spectra are considerable in many situations; therefore, we propose an efficient and reliable computational scheme that is based on the symmetry-adapted cluster-configuration interaction (SAC-CI) method and the polarizable continuum model (PCM) for describing electronic excitations in solution. The new scheme combines the recently proposed first-order PCM SAC-CI method with the PTE (perturbation theory at the energy level) PCM SAC scheme. This is essentially equivalent to the usual SAC and SAC-CI computations using the PCM Hartree-Fock orbitals and integrals, except for the additional correction terms that represent solute-solvent interactions. The test calculations demonstrate that the present method is a very good approximation of the more costly iterative PCM SAC-CI method for excitation energies of closed-shell molecules in their equilibrium geometry. This method provides very accurate values of electric dipole moments but is insufficient for describing the charge-transfer (CT) indices in polar solvent. The present method accurately reproduces the absorption spectra and their solvatochromism of push-pull type 2,2'-bithiophene molecules. Significant solvent and substituent effects on these molecules are intuitively visualized using the CT indices. The present method is the simplest and theoretically consistent extension of the SAC-CI method for including the PCM environment, and therefore, it is useful for theoretical and computational spectroscopy.

  1. Communication: Spin densities within a unitary group based spin-adapted open-shell coupled-cluster theory: Analytic evaluation of isotropic hyperfine-coupling constants for the combinatoric open-shell coupled-cluster scheme

    SciTech Connect

    Datta, Dipayan Gauss, Jürgen

    2015-07-07

    We report analytical calculations of isotropic hyperfine-coupling constants in radicals using a spin-adapted open-shell coupled-cluster theory, namely, the unitary group based combinatoric open-shell coupled-cluster (COSCC) approach within the singles and doubles approximation. A scheme for the evaluation of the one-particle spin-density matrix required in these calculations is outlined within the spin-free formulation of the COSCC approach. In this scheme, the one-particle spin-density matrix for an open-shell state with spin S and M_S = +S is expressed in terms of the one- and two-particle spin-free (charge) density matrices obtained from the Lagrangian formulation that is used for calculating the analytic first derivatives of the energy. Benchmark calculations are presented for NO, NCO, CH2CN, and two conjugated π-radicals, viz., allyl and 1-pyrrolyl in order to demonstrate the performance of the proposed scheme.

  2. An efficient computational scheme for electronic excitation spectra of molecules in solution using the symmetry-adapted cluster–configuration interaction method: The accuracy of excitation energies and intuitive charge-transfer indices

    SciTech Connect

    Fukuda, Ryoichi Ehara, Masahiro

    2014-10-21

    Solvent effects on electronic excitation spectra are considerable in many situations; therefore, we propose an efficient and reliable computational scheme that is based on the symmetry-adapted cluster-configuration interaction (SAC-CI) method and the polarizable continuum model (PCM) for describing electronic excitations in solution. The new scheme combines the recently proposed first-order PCM SAC-CI method with the PTE (perturbation theory at the energy level) PCM SAC scheme. This is essentially equivalent to the usual SAC and SAC-CI computations using the PCM Hartree-Fock orbitals and integrals, except for the additional correction terms that represent solute-solvent interactions. The test calculations demonstrate that the present method is a very good approximation of the more costly iterative PCM SAC-CI method for excitation energies of closed-shell molecules in their equilibrium geometry. This method provides very accurate values of electric dipole moments but is insufficient for describing the charge-transfer (CT) indices in polar solvent. The present method accurately reproduces the absorption spectra and their solvatochromism of push-pull type 2,2′-bithiophene molecules. Significant solvent and substituent effects on these molecules are intuitively visualized using the CT indices. The present method is the simplest and theoretically consistent extension of the SAC-CI method for including the PCM environment, and therefore, it is useful for theoretical and computational spectroscopy.

  3. Convergence Properties of a Class of Probabilistic Adaptive Schemes Called Sequential Reproductive Plans. Psychology and Education Series, Technical Report No. 210.

    ERIC Educational Resources Information Center

    Martin, Nancy

    Presented is a technical report concerning the use of a mathematical model describing certain aspects of the duplication and selection processes in natural genetic adaptation. This reproductive plan/model occurs in artificial genetics (the use of ideas from genetics to develop general problem solving techniques for computers). The reproductive…

  4. New Reduced Two-Time Step Method for Calculating Combustion and Emission Rates of Jet-A and Methane Fuel With and Without Water Injection

    NASA Technical Reports Server (NTRS)

    Molnar, Melissa; Marek, C. John

    2004-01-01

    A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes being developed at Glenn. The two time step method uses either an initial time averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration of 1x10(exp -20) moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two step method is used, as opposed to the one step time averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first, time averaged step is used at the initial times for smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel-air ratio, initial water to fuel mass ratio, temperature, and pressure. The second, instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentration, pressure and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates were then used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx were obtained for Jet-A fuel and methane with and without water injection to water mass loadings of 2/1 water to fuel.
A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water to fuel mass ratio, pressure and temperature (T3

  5. Vortex-dominated conical-flow computations using unstructured adaptively-refined meshes

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    1989-01-01

    A conical Euler/Navier-Stokes algorithm is presented for the computation of vortex-dominated flows. The flow solver involves a multistage Runge-Kutta time stepping scheme which uses a finite-volume spatial discretization on an unstructured grid made up of triangles. The algorithm also employs an adaptive mesh refinement procedure which enriches the mesh locally to more accurately resolve the vortical flow features. Results are presented for several highly-swept delta wing and circular cone cases at high angles of attack and at supersonic freestream flow conditions. Accurate solutions were obtained more efficiently when adaptive mesh refinement was used in contrast with refining the grid globally. The paper presents descriptions of the conical Euler/Navier-Stokes flow solver and adaptive mesh refinement procedures along with results which demonstrate the capability.

  6. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, which designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multi-resolution approach are: 1) the computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales, 2) multi-resolution presentation of heterogeneity as well as of all other input and output variables, 3) an accurate, adaptive and efficient strategy and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but also a description of subsurface processes closely related to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since "state of the art" multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only in lines where the solution changes rapidly. 
Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across

  7. Capacity planning for electronic waste management facilities under uncertainty: multi-objective multi-time-step model development.

    PubMed

    Ahluwalia, Poonam Khanijo; Nema, Arvind K.

    2011-07-01

    Selection of optimum locations for new facilities and decisions regarding capacities at the proposed facilities are major concerns for municipal authorities/managers. The decision as to whether a single facility is preferred over multiple facilities of smaller capacity would vary with the priorities assigned to cost and to associated risks, such as environmental or health risks or the risk perceived by society. Currently, management of waste streams such as computer waste is done using rudimentary practices and is flourishing as an unorganized sector, mainly as backyard workshops, in many cities of developing nations such as India. Uncertainty in the quantification of computer waste generation is another major concern due to the informal setup of the present computer waste management scenario. Hence, there is a need to simultaneously address uncertainty in waste generation quantities while analyzing the tradeoffs between cost and associated risks. The present study aimed to address the above-mentioned issues in a multi-time-step, multi-objective decision-support model that can address the multiple objectives of cost, environmental risk, socially perceived risk and health risk, while selecting the optimum configuration of existing and proposed facilities (locations and capacities). PMID:20935026

  8. Zonal multigrid solution of compressible flow problems on unstructured and adaptive meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1989-01-01

    The simultaneous use of adaptive meshing techniques with a multigrid strategy for solving the 2-D Euler equations in the context of unstructured meshes is studied. To obtain optimal efficiency, methods capable of computing locally improved solutions without recourse to global recalculations are pursued. A method for locally refining an existing unstructured mesh, without regenerating a new global mesh is employed, and the domain is automatically partitioned into refined and unrefined regions. Two multigrid strategies are developed. In the first, time-stepping is performed on a global fine mesh covering the entire domain, and convergence acceleration is achieved through the use of zonal coarse grid accelerator meshes, which lie under the adaptively refined regions of the global fine mesh. Both schemes are shown to produce similar convergence rates to each other, and also with respect to a previously developed global multigrid algorithm, which performs time-stepping throughout the entire domain, on each mesh level. However, the present schemes exhibit higher computational efficiency due to the smaller number of operations on each level.

  9. High-order time-stepping for nonlinear PDEs through rapid estimation of block Gaussian quadrature nodes

    NASA Astrophysics Data System (ADS)

    Lambers, James V.

    2016-06-01

    The stiffness of systems of ODEs that arise from spatial discretization of PDEs causes difficulties for both explicit and implicit time-stepping methods. Krylov Subspace Spectral (KSS) methods present a balance between the efficiency of explicit methods and the stability of implicit methods by computing each Fourier coefficient from an individualized approximation of the solution operator of the PDE. While KSS methods are explicit methods that exhibit a high order of accuracy and stability similar to that of implicit methods, their efficiency needs to be improved. Here, a detailed asymptotic study is performed in order to rapidly estimate all nodes, thus drastically reducing computational expense without sacrificing accuracy. Extension to PDEs on a disk, through expansions built on Legendre polynomials, is also discussed. Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff nonlinear systems of ODE, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this talk, it is proposed to modify EPI methods by using KSS methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. It is also demonstrated that the convergence of Krylov projection can be significantly accelerated, without noticeable loss of accuracy, through filtering techniques, thus improving performance and scalability even further.
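The kernel operation in EPI methods, a product of a matrix function with a vector, is conventionally computed by Krylov projection; a minimal Arnoldi-based sketch for exp(A)v (generic Krylov projection, not the KSS replacement proposed here):

```python
import numpy as np
from scipy.linalg import expm

def krylov_expmv(A, v, m=20):
    """Approximate exp(A) @ v from an m-dimensional Krylov subspace:
    exp(A) v ~ beta * V_m exp(H_m) e_1, with (V_m, H_m) from Arnoldi."""
    n = len(v)
    m = min(m, n)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    return beta * V[:, :m] @ expm(H[:m, :m])[:, 0]
```

The number of projection steps m needed for a fixed accuracy typically grows with the grid resolution for stiff operators; the modification discussed above aims to keep that count bounded.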

  10. Adapting algebraic diagrammatic construction schemes for the polarization propagator to problems with multi-reference electronic ground states exploiting the spin-flip ansatz

    SciTech Connect

    Lefrancois, Daniel; Wormit, Michael; Dreuw, Andreas

    2015-09-28

    For the investigation of molecular systems with electronic ground states exhibiting multi-reference character, a spin-flip (SF) version of the algebraic diagrammatic construction (ADC) scheme for the polarization propagator up to third order perturbation theory (SF-ADC(3)) is derived via the intermediate state representation and implemented into our existing ADC computer program adcman. The accuracy of these new SF-ADC(n) approaches is tested on typical situations, in which the ground state acquires multi-reference character, like bond breaking of H2 and HF, the torsional motion of ethylene, and the excited states of rectangular and square-planar cyclobutadiene. Overall, the results of SF-ADC(n) reveal an accurate description of these systems in comparison with standard multi-reference methods. Thus, the spin-flip versions of ADC are easy-to-use methods for the calculation of “few-reference” systems, which possess a stable single-reference triplet ground state.

  11. An adaptive grid algorithm for one-dimensional nonlinear equations

    NASA Technical Reports Server (NTRS)

    Gutierrez, William E.; Hills, Richard G.

    1990-01-01

    Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements in solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems are studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low and high frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are then used to solve one-dimensional transient partial differential equations, such as the diffusion and convection-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of Richards' equation. 
For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and
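The Picard linearization described in this abstract pairs naturally with a tridiagonal (Thomas) solve; a sketch for a model nonlinear diffusion equation u_t = (D(u) u_x)_x, with an assumed coefficient D(u) = 1 + u² chosen for illustration, not taken from the report:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def picard_step(u, dt, dx, D=lambda w: 1.0 + w**2, tol=1e-10, max_iter=50):
    """One backward-Euler step of u_t = (D(u) u_x)_x with Dirichlet ends,
    linearized by freezing D at the previous Picard iterate."""
    v = u.copy()
    r = dt / dx**2
    for _ in range(max_iter):
        Dh = D(0.5 * (v[:-1] + v[1:]))        # face-centered coefficient
        a = np.concatenate(([0.0], -r * Dh))  # sub-diagonal
        c = np.concatenate((-r * Dh, [0.0]))  # super-diagonal
        b = 1.0 - a - c
        b[0] = b[-1] = 1.0                    # hold boundary values fixed
        c[0] = a[-1] = 0.0
        v_new = thomas(a, b, c, u)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v
```

Each Picard sweep costs one O(n) tridiagonal solve, which is why the tridiagonal method is the baseline the adaptive grid scheme is timed against.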

  12. An unconventional adaptation of a classical Gaussian plume dispersion scheme for the fast assessment of external irradiation from a radioactive cloud

    NASA Astrophysics Data System (ADS)

    Pecha, Petr; Pechova, Emilie

    2014-06-01

    This article focuses on the derivation of an effective algorithm for fast estimation of cloudshine doses/dose rates induced by a large mixture of radionuclides discharged into the atmosphere. A special modification of the classical Gaussian plume approach is proposed for approximation of the near-field dispersion problem. Specifically, the accidental radioactivity release is subdivided into consecutive one-hour Gaussian segments, each driven by a short-term meteorological forecast for the respective hour. Determination of the photon fluence rate from ambient cloud irradiation is coupled to a special decomposition of the Gaussian plume shape into equivalent virtual elliptic disks. This simplifies the formerly used time-consuming 3-D integration and accelerates the computational process on a local scale. An optimal choice of integration limit is adopted on the basis of the mean free path of γ-photons in air. An efficient approach is introduced for treating the wide energy spectrum of the emitted photons, with the usual multi-nuclide approach replaced by a new multi-group scheme. The algorithm is capable of generating the radiological responses in a large net of spatial nodes. This makes the proposed procedure a proper tool for online data assimilation analysis in near-field areas. The specific technique for numerical integration is verified by comparison with a partial analytical solution. Convergence of the finite cloud approximation to the tabulated semi-infinite cloud values for dose conversion factors was validated.
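For reference, the classical Gaussian plume concentration that the scheme modifies can be sketched as follows (a minimal ground-reflection form; the dispersion parameters sigma_y and sigma_z are supplied by the caller rather than computed from stability classes):

```python
import numpy as np

def gaussian_plume(Q, U, y, z, sigma_y, sigma_z, H):
    """Concentration at crosswind offset y and height z for source
    strength Q, wind speed U and effective release height H, with a
    mirror term for reflection at the ground."""
    cross = np.exp(-0.5 * (y / sigma_y) ** 2)
    vert = (np.exp(-0.5 * ((z - H) / sigma_z) ** 2)
            + np.exp(-0.5 * ((z + H) / sigma_z) ** 2))  # ground reflection
    return Q / (2.0 * np.pi * U * sigma_y * sigma_z) * cross * vert
```

The paper's contribution is not this formula itself but the decomposition of the resulting plume shape into elliptic disks so that the photon-fluence integral over the cloud becomes cheap.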

  13. Multi-resolution analysis for ENO schemes

    NASA Technical Reports Server (NTRS)

    Harten, Ami

    1991-01-01

    Given a function, u(x), represented by its cell averages on cells formed by some unstructured grid, we show how to decompose the function into various scales of variation. This is done by considering a set of nested grids in which the given grid is the finest, and identifying in each locality the coarsest grid in the set from which u(x) can be recovered to a prescribed accuracy. This multi-resolution analysis is applied to essentially non-oscillatory (ENO) schemes in order to advance the solution by one time-step. This is accomplished by decomposing the numerical solution at the beginning of each time-step into levels of resolution, and performing the computation in each locality on the appropriate coarser grid. An efficient algorithm for implementing this program in the 1-D case is presented; this algorithm can be extended to the multi-dimensional case with Cartesian grids.
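The decomposition into scales can be made concrete in its simplest (Haar) form, where coarse cell averages are pairwise means and the details record what prediction from the coarse grid misses; Harten's scheme replaces this trivial prediction with a higher-order ENO reconstruction:

```python
import numpy as np

def decompose(u):
    """Split fine-grid cell averages into a coarsest level plus detail
    coefficients per level, finest details first."""
    levels = []
    while len(u) > 1 and len(u) % 2 == 0:
        coarse = 0.5 * (u[0::2] + u[1::2])
        detail = u[0::2] - coarse      # then u[1::2] = coarse - detail
        levels.append(detail)
        u = coarse
    return u, levels

def reconstruct(coarse, levels):
    """Invert decompose() exactly."""
    u = coarse
    for detail in reversed(levels):
        fine = np.empty(2 * len(u))
        fine[0::2] = u + detail
        fine[1::2] = u - detail
        u = fine
    return u
```

In the ENO application, localities where the detail coefficients are below tolerance are advanced on the coarser grid, which is where the speedup comes from.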

  14. An adaptive multigrid model for hurricane track prediction

    NASA Technical Reports Server (NTRS)

    Fulton, Scott R.

    1993-01-01

    This paper describes a simple numerical model for hurricane track prediction which uses a multigrid method to adapt the model resolution as the vortex moves. The model is based on the modified barotropic vorticity equation, discretized in space by conservative finite differences and in time by a Runge-Kutta scheme. A multigrid method is used to solve an elliptic problem for the streamfunction at each time step. Nonuniform resolution is obtained by superimposing uniform grids of different spatial extent; these grids move with the vortex as it moves. Preliminary numerical results indicate that the local mesh refinement allows accurate prediction of the hurricane track with substantially less computer time than required on a single uniform grid.

  15. A high-order discontinuous Galerkin method for fluid–structure interaction with efficient implicit–explicit time stepping

    SciTech Connect

    Froehle, Bradley Persson, Per-Olof

    2014-09-01

    We present a high-order accurate scheme for coupled fluid–structure interaction problems. The fluid is discretized using a discontinuous Galerkin method on unstructured tetrahedral meshes, and the structure uses a high-order volumetric continuous Galerkin finite element method. Standard radial basis functions are used for the mesh deformation. The time integration is performed using a partitioned approach based on implicit–explicit Runge–Kutta methods. The resulting scheme fully decouples the implicit solution procedures for the fluid and the solid parts, which we perform using two separate efficient parallel solvers. We demonstrate up to fifth order accuracy in time on a non-trivial test problem, on which we also show that additional subiterations are not required. We solve a benchmark problem of a cantilever beam in a shedding flow, and show good agreement with other results in the literature. Finally, we solve for the flow around a thin membrane at a high angle of attack in both 2D and 3D, and compare with the results obtained with a rigid plate.
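The implicit-explicit idea can be reduced to a one-line sketch: treat the stiff linear part implicitly and the rest explicitly, so each step costs a single linear solve (first-order splitting shown here; the paper uses high-order IMEX Runge-Kutta tableaux):

```python
import numpy as np

def imex_euler_step(y, h, A, f_explicit):
    """One step for y' = A y (stiff, implicit) + f_explicit(y)
    (nonstiff, explicit): solve (I - h A) y_new = y + h f_explicit(y)."""
    n = len(y)
    return np.linalg.solve(np.eye(n) - h * A, y + h * f_explicit(y))
```

With an eigenvalue of -1000 in A, a step size of 0.1 would blow up a fully explicit Euler step, while the IMEX step remains stable; the partitioned fluid-structure scheme above exploits the same decoupling so that fluid and solid solves stay separate.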

  16. A modified implicit Monte Carlo method for time-dependent radiative transfer with adaptive material coupling

    SciTech Connect

    McClarren, Ryan G. Urbatsch, Todd J.

    2009-09-01

    In this paper we develop a robust implicit Monte Carlo (IMC) algorithm based on more accurately updating the linearized equilibrium radiation energy density. The method does not introduce oscillations in the solution and has the same Δt → ∞ limit as the standard Fleck and Cummings IMC method. Moreover, the approach we introduce can be trivially added to current implementations of IMC by changing the definition of the Fleck factor. Using this new method we develop an adaptive scheme that uses either standard IMC or the modified method, basing the adaptation on a zero-dimensional problem solved in each cell. Numerical results demonstrate that the new method can avoid the nonphysical overheating that occurs in standard IMC when the time step is large. The method also leads to decreased noise in the material temperature at the cost of a potential increase in the radiation temperature noise.
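For context, the standard Fleck factor whose definition the modified method changes is, in the usual notation (α the user-chosen implicitness parameter, β = 4aT³/(ρc_v), c the speed of light, σ_a the absorption opacity):

```python
def fleck_factor(alpha, beta, c, sigma_a, dt):
    """Standard IMC Fleck factor: the fraction of absorption treated
    as true absorption; the remainder 1 - f acts as effective
    scattering within the time step."""
    return 1.0 / (1.0 + alpha * beta * c * sigma_a * dt)
```

As dt grows the factor shrinks toward zero, pushing more of the absorption into effective scattering; the adaptive scheme above decides per cell whether this standard definition or the modified one is used.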

  17. An Energy Decaying Scheme for Nonlinear Dynamics of Shells

    NASA Technical Reports Server (NTRS)

    Bottasso, Carlo L.; Bauchau, Olivier A.; Choi, Jou-Young; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    A novel integration scheme for nonlinear dynamics of geometrically exact shells is developed based on the inextensible director assumption. The new algorithm is designed so as to imply the strict decay of the system total mechanical energy at each time step, and consequently unconditional stability is achieved in the nonlinear regime. Furthermore, the scheme features tunable high-frequency numerical damping and is therefore stiffly accurate. The method is tested for a finite element spatial formulation of shells based on mixed interpolations of strain tensorial components and on a two-parameter representation of director rotations. The robustness of the scheme is illustrated with the help of numerical examples.

  18. Accuracy of schemes with nonuniform meshes for compressible fluid flows

    NASA Technical Reports Server (NTRS)

    Turkel, E.

    1985-01-01

    The accuracy of the space discretization for time-dependent problems when a nonuniform mesh is used is considered. Many schemes reduce to first-order accuracy while a popular finite volume scheme is even inconsistent for general grids. This accuracy is based on physical variables. However, when accuracy is measured in computational variables then second-order accuracy can be obtained. This is meaningful only if the mesh accurately reflects the properties of the solution. In addition, the stability properties of some improved accurate schemes are analyzed and it can be shown that they also allow for larger time steps when Runge-Kutta type methods are used to advance in time.

  19. ADAPT model: Model use, calibration and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper presents an overview of the Agricultural Drainage and Pesticide Transport (ADAPT) model and a case study to illustrate the calibration and validation steps for predicting subsurface tile drainage and nitrate-N losses from an agricultural system. The ADAPT model is a daily time step field ...

  20. The PLUTO Code for Adaptive Mesh Computations in Astrophysical Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Mignone, A.; Zanni, C.; Tzeferacos, P.; van Straalen, B.; Colella, P.; Bodo, G.

    2012-01-01

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  1. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    SciTech Connect

    Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.

    2012-01-01

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  2. Building a better leapfrog. [an algorithm for ensuring time symmetry in any integration scheme]

    NASA Technical Reports Server (NTRS)

    Hut, Piet; Makino, Jun; Mcmillan, Steve

    1995-01-01

    In stellar dynamical computer simulations, as well as other types of simulations using particles, time step size is often held constant in order to guarantee a high degree of energy conservation. In many applications, allowing the time step size to change in time can offer a great saving in computational cost, but variable-size time steps usually imply a substantial degradation in energy conservation. We present a 'meta-algorithm' for choosing time steps in such a way as to guarantee time symmetry in any integration scheme, thus allowing vastly improved energy conservation for orbital calculations with variable time steps. We apply the algorithm to the familiar leapfrog scheme, and generalize to higher order integration schemes, showing how the stability properties of the fixed-step leapfrog scheme can be extended to higher order, variable-step integrators such as the Hermite method. We illustrate the remarkable properties of these time-symmetric integrators for the case of a highly eccentric elliptical Kepler orbit and discuss applications to more complex problems.
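    The idea of the time-symmetric step-size choice can be sketched for the Kepler problem: pick Δt from a symmetric average of a step-size function evaluated at the start and end of the step, found by a short fixed-point iteration. The criterion h(r) = η r^(3/2) and all parameter values below are illustrative assumptions, not the paper's prescription.

```python
import numpy as np

def accel(x):
    r = np.linalg.norm(x)
    return -x / r**3                          # unit-GM Kepler problem

def h(x, eta=0.02):
    return eta * np.linalg.norm(x)**1.5       # assumed step-size criterion

def leapfrog(x, v, dt):
    v_half = v + 0.5 * dt * accel(x)          # kick
    x_new = x + dt * v_half                   # drift
    v_new = v_half + 0.5 * dt * accel(x_new)  # kick
    return x_new, v_new

def symmetric_step(x, v, n_iter=3):
    # Iterate so that dt = (h(start) + h(end)) / 2, making the step-size
    # choice invariant under time reversal.
    dt = h(x)
    for _ in range(n_iter):
        x_trial, v_trial = leapfrog(x, v, dt)
        dt = 0.5 * (h(x) + h(x_trial))
    return leapfrog(x, v, dt)

def energy(x, v):
    return 0.5 * np.dot(v, v) - 1.0 / np.linalg.norm(x)

x, v = np.array([1.0, 0.0]), np.array([0.0, 0.5])   # eccentric orbit, e = 0.75
E0 = energy(x, v)
for _ in range(5000):
    x, v = symmetric_step(x, v)
rel_err = abs(energy(x, v) - E0) / abs(E0)
```

Because the step-size choice is (approximately) invariant under time reversal, the energy error oscillates over an orbit instead of drifting secularly, even for this e = 0.75 orbit.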

  3. Compact integration factor methods for complex domains and adaptive mesh refinement

    PubMed Central

    Liu, Xinfeng; Nie, Qing

    2010-01-01

    The implicit integration factor (IIF) method, a class of efficient semi-implicit temporal schemes, was introduced recently for stiff reaction-diffusion equations. To reduce the cost of IIF, the compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of exponential matrices associated with the diffusion operators in two and three spatial dimensions for Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinates, due to the compact representation for the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF for other curvilinear coordinates through examples of polar and spherical coordinates. The new cIIF method in polar and spherical coordinates has similar computational efficiency and stability properties as the cIIF in Cartesian coordinates. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability condition for cIIF. Because the second order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply those methods to simulating a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR, curvilinear and Cartesian coordinates. Excellent performance of the new methods is observed. PMID:20543883
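    A minimal sketch of the (non-compact) second-order IIF idea on a 1D Fisher equation with periodic boundaries: the stiff diffusion is handled exactly through a matrix exponential, and the reaction is treated semi-implicitly with a local fixed-point solve. The grid size, coefficients, and solver choices are illustrative assumptions, not the cIIF construction of the paper.

```python
import numpy as np

# u_t = D u_xx + u(1 - u) on a periodic 1D grid, second-order IIF:
#   u^{n+1} = e^{A dt} (u^n + dt/2 f(u^n)) + dt/2 f(u^{n+1})
N, L, D = 64, 10.0, 0.1
dx = L / N

# Periodic second-difference (diffusion) matrix A = D * discrete Laplacian
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = -2.0
    A[i, (i + 1) % N] = 1.0
    A[i, (i - 1) % N] = 1.0
A *= D / dx**2

# Exponential of the symmetric matrix A*dt via eigendecomposition
dt = 0.5
w, Q = np.linalg.eigh(A)
expAdt = Q @ np.diag(np.exp(w * dt)) @ Q.T

f = lambda u: u * (1.0 - u)
x = np.linspace(0.0, L, N, endpoint=False)
u = 0.1 + 0.05 * np.sin(2 * np.pi * x / L)   # positive initial data

for _ in range(100):
    rhs = expAdt @ (u + 0.5 * dt * f(u))
    u_new = u.copy()
    for _ in range(50):          # fixed-point solve of the local implicit term
        u_new = rhs + 0.5 * dt * f(u_new)
    u = u_new
```

With positive initial data the Fisher dynamics drive the solution to the stable state u = 1, and the scheme remains stable at dt = 0.5 even though the explicit diffusion limit for this grid would be far smaller (dx²/2D ≈ 0.12).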

  4. Time-stepping approach for solving upper-bound problems: Application to two-dimensional Rayleigh-Bénard convection.

    PubMed

    Wen, Baole; Chini, Gregory P; Kerswell, Rich R; Doering, Charles R

    2015-10-01

    An alternative computational procedure for numerically solving a class of variational problems arising from rigorous upper-bound analysis of forced-dissipative infinite-dimensional nonlinear dynamical systems, including the Navier-Stokes and Oberbeck-Boussinesq equations, is analyzed and applied to Rayleigh-Bénard convection. A proof that the only steady state to which this numerical algorithm can converge is the required global optimal of the relevant variational problem is given for three canonical flow configurations. In contrast with most other numerical schemes for computing the optimal bounds on transported quantities (e.g., heat or momentum) within the "background field" variational framework, which employ variants of Newton's method and hence require very accurate initial iterates, the new computational method is easy to implement and, crucially, does not require numerical continuation. The algorithm is used to determine the optimal background-method bound on the heat transport enhancement factor, i.e., the Nusselt number (Nu), as a function of the Rayleigh number (Ra), Prandtl number (Pr), and domain aspect ratio L in two-dimensional Rayleigh-Bénard convection between stress-free isothermal boundaries (Rayleigh's original 1916 model of convection). The result of the computation is significant because analyses, laboratory experiments, and numerical simulations have suggested a range of exponents α and β in the presumed Nu ∼ Pr^α Ra^β scaling relation. The computations clearly show that for Ra ≤ 10^10 at fixed L = 2√2, Nu ≤ 0.106 Pr^0 Ra^(5/12), which indicates that molecular transport cannot generally be neglected in the "ultimate" high-Ra regime. PMID:26565337

  5. Time-stepping approach for solving upper-bound problems: Application to two-dimensional Rayleigh-Bénard convection

    NASA Astrophysics Data System (ADS)

    Wen, Baole; Chini, Gregory P.; Kerswell, Rich R.; Doering, Charles R.

    2015-10-01

    An alternative computational procedure for numerically solving a class of variational problems arising from rigorous upper-bound analysis of forced-dissipative infinite-dimensional nonlinear dynamical systems, including the Navier-Stokes and Oberbeck-Boussinesq equations, is analyzed and applied to Rayleigh-Bénard convection. A proof that the only steady state to which this numerical algorithm can converge is the required global optimal of the relevant variational problem is given for three canonical flow configurations. In contrast with most other numerical schemes for computing the optimal bounds on transported quantities (e.g., heat or momentum) within the "background field" variational framework, which employ variants of Newton's method and hence require very accurate initial iterates, the new computational method is easy to implement and, crucially, does not require numerical continuation. The algorithm is used to determine the optimal background-method bound on the heat transport enhancement factor, i.e., the Nusselt number (Nu), as a function of the Rayleigh number (Ra), Prandtl number (Pr), and domain aspect ratio L in two-dimensional Rayleigh-Bénard convection between stress-free isothermal boundaries (Rayleigh's original 1916 model of convection). The result of the computation is significant because analyses, laboratory experiments, and numerical simulations have suggested a range of exponents α and β in the presumed Nu ∼ Pr^α Ra^β scaling relation. The computations clearly show that for Ra ≤ 10^10 at fixed L = 2√2, Nu ≤ 0.106 Pr^0 Ra^(5/12), which indicates that molecular transport cannot generally be neglected in the "ultimate" high-Ra regime.

  6. Re-evaluation of an Optimized Second Order Backward Difference (BDF2OPT) Scheme for Unsteady Flow Applications

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Carpenter, Mark H.; Lockard, David P.

    2009-01-01

    Recent experience in the application of an optimized, second-order, backward-difference (BDF2OPT) temporal scheme is reported. The primary focus of the work is on obtaining accurate solutions of the unsteady Reynolds-averaged Navier-Stokes equations over long periods of time for aerodynamic problems of interest. The baseline flow solver under consideration uses a particular BDF2OPT temporal scheme with a dual-time-stepping algorithm for advancing the flow solutions in time. Numerical difficulties are encountered with this scheme when the flow code is run for a large number of time steps, a behavior not seen with the standard second-order, backward-difference, temporal scheme. Based on a stability analysis, slight modifications to the BDF2OPT scheme are suggested. The performance and accuracy of this modified scheme is assessed by comparing the computational results with other numerical schemes and experimental data.
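    For reference, the standard (unoptimized) BDF2 update that BDF2OPT modifies can be checked for second-order convergence on a scalar linear ODE; the test problem and exact-startup choice below are illustrative, and neither the optimized coefficients nor the dual-time-stepping machinery of the paper are reproduced.

```python
import numpy as np

# Standard BDF2 for u' = lam * u:
#   (3 u^{n+1} - 4 u^n + u^{n-1}) / (2 dt) = lam * u^{n+1}
lam, T = -2.0, 1.0

def bdf2_solve(nsteps):
    dt = T / nsteps
    u_prev = 1.0                   # exact initial condition u(0)
    u_curr = np.exp(lam * dt)      # exact startup value for the first step
    for _ in range(nsteps - 1):
        u_next = (4.0 * u_curr - u_prev) / (3.0 - 2.0 * dt * lam)
        u_prev, u_curr = u_curr, u_next
    return u_curr

exact = np.exp(lam * T)
err_coarse = abs(bdf2_solve(100) - exact)
err_fine = abs(bdf2_solve(200) - exact)
ratio = err_coarse / err_fine      # ≈ 4 for a second-order scheme
```

Halving the step size reduces the global error by a factor of about four, confirming second-order accuracy of the baseline scheme.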

  7. Method For Model-Reference Adaptive Control

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1990-01-01

    Relatively simple method of model-reference adaptive control (MRAC) developed from two prior classes of MRAC techniques: the signal-synthesis method and the parameter-adaptation method. Both are incorporated into a unified theory, which yields a more general adaptation scheme.
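    A textbook-style scalar example conveys the flavor of MRAC; this is a generic Lyapunov-based adaptation law for illustration, with an assumed plant, reference model, and gains, and is not the unified scheme of this work.

```python
import numpy as np

# Scalar MRAC sketch (generic illustration):
# Plant:      x'  = a*x + u          (a unknown to the controller)
# Reference:  xm' = -2*xm + 2*r
# Control:    u = -theta*x + 2*r,    ideal gain theta* = a + 2
# Adaptation: theta' = gamma*e*x,    tracking error e = x - xm
a_true, gamma, dt = 1.0, 2.0, 1e-3
x = xm = theta = 0.0
errs = []
for k in range(int(30.0 / dt)):
    r = 1.0 if (k * dt) % 4.0 < 2.0 else -1.0   # square-wave reference
    e = x - xm
    u = -theta * x + 2.0 * r
    x += dt * (a_true * x + u)                  # forward-Euler plant update
    xm += dt * (-2.0 * xm + 2.0 * r)            # reference model update
    theta += dt * gamma * e * x                 # adaptation law
    errs.append(abs(e))
final_err = np.mean(errs[-5000:])               # mean |e| over the last 5 s
```

The adaptation law makes V = e²/2 + (theta - theta*)²/(2 gamma) nonincreasing, so the tracking error decays and, with the persistently exciting square-wave reference, theta converges to the ideal gain a + 2.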

  8. A stable scheme for a nonlinear, multiphase tumor growth model with an elastic membrane

    PubMed Central

    Chen, Ying; Wise, Steven M.; Shenoy, Vivek B.; Lowengrub, John S.

    2014-01-01

    Summary In this paper, we extend the 3D multispecies diffuse-interface model of tumor growth, which was derived in Wise et al. (Three-dimensional multispecies nonlinear tumor growth-I: model and numerical method, J. Theor. Biol. 253 (2008) 524–543), and incorporate the effect of a stiff membrane to model tumor growth in a confined microenvironment. We then develop accurate and efficient numerical methods to solve the model. When the membrane is endowed with a surface energy, the model is variational, and the numerical scheme, which involves adaptive mesh refinement and a nonlinear multigrid finite difference method, is shown to be energy stable. Namely, in the absence of cell proliferation and death, the discrete energy is a nonincreasing function of time for any time and space steps. When a simplified model of membrane elastic energy is used, the resulting model is derived analogously to the surface energy case. However, the elastic energy model is actually nonvariational because certain coupling terms are neglected. Nevertheless, a very stable numerical scheme is developed following the strategy used in the surface energy case. 2D and 3D simulations are performed that demonstrate the accuracy of the algorithm and illustrate the shape instabilities and nonlinear effects of membrane elastic forces that may resist or enhance growth of the tumor. Compared with the standard Crank–Nicolson method, the time step can be up to 25 times larger using the new approach. PMID:24443369

  9. A stable scheme for a nonlinear, multiphase tumor growth model with an elastic membrane.

    PubMed

    Chen, Ying; Wise, Steven M; Shenoy, Vivek B; Lowengrub, John S

    2014-07-01

    In this paper, we extend the 3D multispecies diffuse-interface model of tumor growth, which was derived in Wise et al. (Three-dimensional multispecies nonlinear tumor growth-I: model and numerical method, J. Theor. Biol. 253 (2008) 524-543), and incorporate the effect of a stiff membrane to model tumor growth in a confined microenvironment. We then develop accurate and efficient numerical methods to solve the model. When the membrane is endowed with a surface energy, the model is variational, and the numerical scheme, which involves adaptive mesh refinement and a nonlinear multigrid finite difference method, is shown to be energy stable. Namely, in the absence of cell proliferation and death, the discrete energy is a nonincreasing function of time for any time and space steps. When a simplified model of membrane elastic energy is used, the resulting model is derived analogously to the surface energy case. However, the elastic energy model is actually nonvariational because certain coupling terms are neglected. Nevertheless, a very stable numerical scheme is developed following the strategy used in the surface energy case. 2D and 3D simulations are performed that demonstrate the accuracy of the algorithm and illustrate the shape instabilities and nonlinear effects of membrane elastic forces that may resist or enhance growth of the tumor. Compared with the standard Crank-Nicolson method, the time step can be up to 25 times larger using the new approach. PMID:24443369

  10. The GEMPAK Barnes objective analysis scheme

    NASA Technical Reports Server (NTRS)

    Koch, S. E.; Desjardins, M.; Kocin, P. J.

    1981-01-01

    GEMPAK, an interactive computer software system developed for the purpose of assimilating, analyzing, and displaying various conventional and satellite meteorological data types, is discussed. The objective map analysis scheme possesses certain characteristics that allowed it to be adapted to meet the analysis needs of GEMPAK. Those characteristics and the specific adaptation of the scheme to GEMPAK are described. A step-by-step guide for using the GEMPAK Barnes scheme on an interactive computer (in real time) to analyze various types of meteorological datasets is also presented.
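    The two-pass Barnes analysis itself is easy to sketch: Gaussian-weighted averaging of scattered observations onto a grid, followed by a correction pass that re-analyzes the observation residuals with a narrowed weight function. The synthetic field, the parameters κ and γ, and the station layout below are illustrative assumptions, not GEMPAK's defaults.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations" of a smooth field at scattered stations
truth = lambda x, y: np.sin(x) * np.cos(y)
xo = rng.uniform(0.0, np.pi, 400)
yo = rng.uniform(0.0, np.pi, 400)
fo = truth(xo, yo)

# Analysis grid
xg, yg = np.meshgrid(np.linspace(0, np.pi, 30), np.linspace(0, np.pi, 30))

def barnes_pass(values, kappa):
    # Gaussian-weighted average of station values at every grid point
    d2 = (xg[..., None] - xo) ** 2 + (yg[..., None] - yo) ** 2
    w = np.exp(-d2 / kappa)
    return (w * values).sum(axis=-1) / w.sum(axis=-1)

kappa, gamma = 0.2, 0.3
g0 = barnes_pass(fo, kappa)                      # first pass (oversmoothed)

# Second pass: re-analyze the observation residuals with a sharper weight
d2o = (xo[:, None] - xo) ** 2 + (yo[:, None] - yo) ** 2
wo = np.exp(-d2o / kappa)
g0_at_obs = (wo * fo).sum(axis=-1) / wo.sum(axis=-1)
g1 = g0 + barnes_pass(fo - g0_at_obs, gamma * kappa)

rmse0 = np.sqrt(np.mean((g0 - truth(xg, yg)) ** 2))
rmse1 = np.sqrt(np.mean((g1 - truth(xg, yg)) ** 2))
```

The first pass oversmooths the field; the residual pass restores the detail it removed, lowering the grid RMSE.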

  11. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. This approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  12. Comparison of Several Dissipation Algorithms for Central Difference Schemes

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Radespiel, R.; Turkel, E.

    1997-01-01

    Several algorithms for introducing artificial dissipation into a central difference approximation to the Euler and Navier Stokes equations are considered. The focus of the paper is on the convective upwind and split pressure (CUSP) scheme, which is designed to support single interior point discrete shock waves. This scheme is analyzed and compared in detail with scalar and matrix dissipation (MATD) schemes. Resolution capability is determined by solving subsonic, transonic, and hypersonic flow problems. A finite-volume discretization and a multistage time-stepping scheme with multigrid are used to compute solutions to the flow equations. Numerical results are also compared with either theoretical solutions or experimental data. For transonic airfoil flows the best accuracy on coarse meshes for aerodynamic coefficients is obtained with a simple MATD scheme.
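    The role of artificial dissipation in a central difference scheme can be seen in a minimal 1D linear advection test: the bare central scheme produces dispersive ripples around a discontinuity, and a fourth-difference dissipation term damps them. The RK4 multistage time stepping and the coefficient ε below are illustrative choices, not the CUSP or matrix dissipation constructions compared in the paper.

```python
import numpy as np

N, cfl, eps = 200, 0.5, 0.02
dx = 1.0 / N
dt = cfl * dx
x = np.linspace(0.0, 1.0, N, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)   # top-hat profile

def rhs(u, eps4):
    # Central difference for u_t + u_x = 0, plus 4th-difference dissipation
    conv = -(np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    d4 = (np.roll(u, -2) - 4 * np.roll(u, -1) + 6 * u
          - 4 * np.roll(u, 1) + np.roll(u, 2))
    return conv - eps4 * d4 / dt          # CFL-type scaling of the dissipation

def advance(u, eps4, nsteps):
    for _ in range(nsteps):
        # classical four-stage Runge-Kutta
        k1 = rhs(u, eps4)
        k2 = rhs(u + 0.5 * dt * k1, eps4)
        k3 = rhs(u + 0.5 * dt * k2, eps4)
        k4 = rhs(u + dt * k3, eps4)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

n = int(0.5 / dt)                          # advect across half the domain
u_plain = advance(u0.copy(), 0.0, n)
u_diss = advance(u0.copy(), eps, n)

overshoot_plain = u_plain.max() - 1.0
overshoot_diss = u_diss.max() - 1.0
```

The dissipative version damps the high-wavenumber Gibbs oscillations while leaving the smooth part of the profile essentially untouched.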

  13. A high resolution upwind scheme for multi-component flows

    NASA Astrophysics Data System (ADS)

    Igra, D.; Takayama, K.

    2002-04-01

    Conservative schemes usually produce non-physical oscillations in multi-component flow solutions. Many methods have been proposed to avoid these oscillations. Some of these correction schemes can fix the oscillations in the pressure profile at discontinuities, but the density profile still remains diffused between the two components. In the case of gas-liquid interfaces, density diffusion is not acceptable. In this paper, the interfacial correction scheme proposed by Cocchi et al. is modified to be used in conjunction with the level-set approach. After each time step, the two grid points that bound the interface are recalculated using an exact Riemann solver, so that pressure oscillations and density diffusion at discontinuities are eliminated. The scheme presented here can be applied to any type of conservation law solver. Some examples solved by this scheme are presented, and their results are compared with the exact solution when available. Good agreement is obtained between the present results and the exact solutions.

  14. The effect of large-scale model time step and multiscale coupling frequency on cloud climatology, vertical structure, and rainfall extremes in a superparameterized GCM

    NASA Astrophysics Data System (ADS)

    Yu, Sungduk; Pritchard, Michael S.

    2015-12-01

    The effect of global climate model (GCM) time step—which also controls how frequently global and embedded cloud resolving scales are coupled—is examined in the Superparameterized Community Atmosphere Model ver 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m2) and longwave cloud forcing (~5 W/m2) occur as scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as the global model time step is reduced, may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. The results may also be useful for helping to tune them.

  15. The effect of large-scale model time step and multiscale coupling frequency on cloud climatology, vertical structure, and rainfall extremes in a superparameterized GCM

    DOE PAGESBeta

    Yu, Sungduk; Pritchard, Michael S.

    2015-12-17

    The effect of global climate model (GCM) time step—which also controls how frequently global and embedded cloud resolving scales are coupled—is examined in the Superparameterized Community Atmosphere Model ver 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m2) and longwave cloud forcing (~5 W/m2) occur as scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as the global model time step is reduced, may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. Lastly, the results may also be useful for helping to tune them.

  16. The effect of large-scale model time step and multiscale coupling frequency on cloud climatology, vertical structure, and rainfall extremes in a superparameterized GCM

    SciTech Connect

    Yu, Sungduk; Pritchard, Michael S.

    2015-12-17

    The effect of global climate model (GCM) time step—which also controls how frequently global and embedded cloud resolving scales are coupled—is examined in the Superparameterized Community Atmosphere Model ver 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m2) and longwave cloud forcing (~5 W/m2) occur as scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as the global model time step is reduced, may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. Lastly, the results may also be useful for helping to tune them.

  17. Adaptive finite elements with high aspect ratio for the computation of coalescence using a phase-field model

    NASA Astrophysics Data System (ADS)

    Burman, E.; Jacot, A.; Picasso, M.

    2004-03-01

    A multiphase-field model for the description of coalescence in a binary alloy is solved numerically using adaptive finite elements with high aspect ratio. The unknowns of the multiphase-field model are the three phase fields (solid phase 1, solid phase 2, and liquid phase), a Lagrange multiplier, and the concentration field. An Euler implicit scheme is used for time discretization, together with continuous, piecewise linear finite elements. At each time step, a linear system corresponding to the three phases plus the Lagrange multiplier has to be solved. Then, the linear system pertaining to concentration is solved. An adaptive finite element algorithm is proposed. In order to reduce the number of mesh vertices, the generated meshes contain elements with high aspect ratio. The refinement and coarsening criteria are based on an error indicator which has already been justified theoretically for simpler problems. Numerical results on two test cases show the efficiency of the method.

  18. Adaptable DC offset correction

    NASA Technical Reports Server (NTRS)

    Golusky, John M. (Inventor); Muldoon, Kelly P. (Inventor)

    2009-01-01

    Methods and systems for adaptable DC offset correction are provided. An exemplary adaptable DC offset correction system evaluates an incoming baseband signal to determine an appropriate DC offset removal scheme; removes a DC offset from the incoming baseband signal based on the appropriate DC offset scheme in response to the evaluated incoming baseband signal; and outputs a reduced DC baseband signal in response to the DC offset removed from the incoming baseband signal.
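    As a generic illustration of DC offset removal (an assumed exponential-moving-average estimator, not the patented adaptable scheme), one can track the offset with a slow running mean and subtract it:

```python
import numpy as np

# Toy DC-offset remover: estimate the offset with an exponential moving
# average (EMA) and subtract it from the signal.
def remove_dc(signal, alpha=0.01):
    out = np.empty_like(signal)
    est = 0.0
    for i, s in enumerate(signal):
        est += alpha * (s - est)   # running DC estimate
        out[i] = s - est
    return out

t = np.arange(4000)
sig = np.sin(2 * np.pi * t / 50) + 0.7   # tone plus a constant 0.7 DC offset
clean = remove_dc(sig)
residual_dc = abs(clean[-1000:].mean())  # offset remaining after convergence
```

The EMA time constant (1/alpha samples) sets the trade-off between how fast the offset estimate converges and how much of the low-frequency signal content leaks into the estimate.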

  19. Eulerian Lagrangian Adaptive Fup Collocation Method for solving the conservative solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Gotovac, Hrvoje; Srzic, Veljko

    2014-05-01

    linear system on the adaptive grid, because each Fup coefficient is obtained from predefined formulas equating the Fup expansion around the corresponding collocation point with a particular collocation operator based on a few surrounding solution values. Furthermore, each Fup coefficient can be obtained independently, which is perfectly suited to parallel processing. The adaptive grid in each time step is obtained from the solution of the previous time step (or the initial conditions) and an advective Lagrangian step in the current time step, according to the velocity field and continuous streamlines. On the other hand, we implement the explicit stabilized routine SERK2 for the dispersive Eulerian part of the solution in the current time step on the obtained spatial adaptive grid. The overall adaptive concept does not require solving large linear systems for the spatial and temporal approximation of conservative transport. This new Eulerian-Lagrangian collocation scheme also resolves all of the mentioned numerical problems due to its adaptive nature and its ability to control numerical errors in space and time. The proposed method solves advection in a Lagrangian way, eliminating problems of Eulerian methods, while the optimal collocation grid efficiently describes the solution and boundary conditions, eliminating the use of a large number of particles and other problems of Lagrangian methods. Finally, numerical tests show that this approach enables not only an accurate velocity field, but also conservative transport, even in highly heterogeneous porous media, resolving all spatial and temporal scales of the concentration field.

  20. Adaptive Development

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The goal of this research is to develop and demonstrate innovative adaptive seal technologies that can lead to dramatic improvements in engine performance, life, range, and emissions, and enhance operability for next generation gas turbine engines. This work is concentrated on the development of self-adaptive clearance control systems for gas turbine engines. Researchers have targeted the high-pressure turbine (HPT) blade tip seal location for following reasons: Current active clearance control (ACC) systems (e.g., thermal case-cooling schemes) cannot respond to blade tip clearance changes due to mechanical, thermal, and aerodynamic loads. As such they are prone to wear due to the required tight running clearances during operation. Blade tip seal wear (increased clearances) reduces engine efficiency, performance, and service life. Adaptive sealing technology research has inherent impact on all envisioned 21st century propulsion systems (e.g. distributed vectored, hybrid and electric drive propulsion concepts).

  1. Stability analysis of intermediate boundary conditions in approximate factorization schemes

    NASA Technical Reports Server (NTRS)

    South, J. C., Jr.; Hafez, M. M.; Gottlieb, D.

    1986-01-01

    The paper discusses the role of the intermediate boundary condition in the AF2 scheme used by Holst for simulation of the transonic full potential equation. It is shown that the treatment suggested by Holst led to a restriction on the time step, and ways to overcome this restriction are suggested. The discussion is based on the theory developed by Gustafsson, Kreiss, and Sundstrom and also on the von Neumann method.

  2. Multi-resolution analysis for ENO schemes

    NASA Technical Reports Server (NTRS)

    Harten, Ami

    1993-01-01

    Given a function u(x) which is represented by its cell averages on cells formed by some unstructured grid, we show how to decompose the function into various scales of variation. This is done by considering a set of nested grids in which the given grid is the finest, and identifying in each locality the coarsest grid in the set from which u(x) can be recovered to a prescribed accuracy. We apply this multi-resolution analysis to essentially non-oscillatory (ENO) schemes in order to reduce the number of numerical flux computations needed to advance the solution by one time step. This is accomplished by decomposing the numerical solution at the beginning of each time step into levels of resolution, and performing the computation in each locality on the appropriate coarser grid. We present an efficient algorithm for implementing this program in the one-dimensional case; this algorithm can be extended to the multi-dimensional case with Cartesian grids.
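
    The nested-grid decomposition can be illustrated with a minimal dyadic 1-D sketch (zeroth-order prediction between levels; the paper treats general unstructured grids and higher-order recovery):

```python
# Harten-style multi-resolution of cell averages on a dyadic 1-D grid.
# Small detail coefficients flag localities where a coarser grid suffices.
def decompose(u):
    """Split fine-cell averages into a global mean + per-level details."""
    levels = []
    while len(u) > 1:
        coarse = [(u[2*i] + u[2*i+1]) / 2 for i in range(len(u) // 2)]
        # detail = what the (zeroth-order) prediction from the coarse
        # grid misses on the fine grid
        detail = [u[2*i] - coarse[i] for i in range(len(coarse))]
        levels.append(detail)
        u = coarse
    return u[0], levels

def reconstruct(mean, levels):
    """Exact inverse of decompose: u[2i] = c + d, u[2i+1] = c - d."""
    u = [mean]
    for detail in reversed(levels):
        u = [v for c, d in zip(u, detail) for v in (c + d, c - d)]
    return u
```

    The round trip is exact; truncating small details gives the compressed representation used to skip flux computations in smooth regions.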

  3. Development of an unstructured solution adaptive method for the quasi-three-dimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Yi-Tsann; Usab, William J., Jr.

    1993-01-01

    A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.

  4. Development of an unstructured solution adaptive method for the quasi-three-dimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Yi-Tsann

    1993-01-01

    A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.

  5. Nonlinear secret image sharing scheme.

    PubMed

    Shin, Sang-Ho; Lee, Gil-Je; Yoo, Kee-Young

    2014-01-01

    Over the past decade, most secret image sharing schemes have been proposed using Shamir's technique, which is based on linear combination polynomial arithmetic. Although secret image sharing schemes based on Shamir's technique are efficient and scalable for various environments, there exists a security threat such as the Tompa-Woll attack. Renvall and Ding proposed a new secret sharing technique based on nonlinear combination polynomial arithmetic in order to address this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. In order to achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with the XOR Boolean algebra operation, define a new variable m, and change the range of the prime p in the sharing procedure. In order to evaluate the efficiency and security of the proposed scheme, we use the embedding capacity and PSNR. The average PSNR and embedding capacity are 44.78 dB and 1.74t⌈log2 m⌉ bits per pixel (bpp), respectively. PMID:25140334
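
    For context, the classical linear Shamir construction that the paper contrasts with its nonlinear scheme can be sketched as follows (illustrative only; the prime, parameters, and helper names are assumptions, and this is not the proposed nonlinear algorithm):

```python
# Minimal Shamir (t, n)-threshold secret sharing over a prime field.
# Any t of the n shares recover the secret; fewer reveal nothing.
import random

P = 257  # small prime > 255, so one pixel/byte fits in one field element

def make_shares(secret, t, n):
    # random degree-(t-1) polynomial with the secret as constant term
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

    The linearity of the interpolation is precisely what the Tompa-Woll attack exploits, motivating the nonlinear variant studied in the paper.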

  6. Nonlinear Secret Image Sharing Scheme

    PubMed Central

    Shin, Sang-Ho; Yoo, Kee-Young

    2014-01-01

    Over the past decade, most secret image sharing schemes have been proposed using Shamir's technique, which is based on linear combination polynomial arithmetic. Although secret image sharing schemes based on Shamir's technique are efficient and scalable for various environments, there exists a security threat such as the Tompa-Woll attack. Renvall and Ding proposed a new secret sharing technique based on nonlinear combination polynomial arithmetic in order to address this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. In order to achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with the XOR Boolean algebra operation, define a new variable m, and change the range of the prime p in the sharing procedure. In order to evaluate the efficiency and security of the proposed scheme, we use the embedding capacity and PSNR. The average PSNR and embedding capacity are 44.78 dB and 1.74t⌈log2 m⌉ bits per pixel (bpp), respectively. PMID:25140334

  7. Multistage Schemes with Multigrid for Euler and Navier-Stokes Equations: Components and Analysis

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, Eli

    1997-01-01

    A class of explicit multistage time-stepping schemes with centered spatial differencing and multigrid is considered for the compressible Euler and Navier-Stokes equations. These schemes are the basis for a family of computer programs (the flow codes with multigrid (FLOMG) series) currently used to solve a wide range of fluid dynamics problems, including internal and external flows. In this paper, the components of these multistage time-stepping schemes are defined, discussed, and in many cases analyzed to provide additional insight into their behavior. Special emphasis is given to numerical dissipation, stability of Runge-Kutta schemes, and the convergence acceleration techniques of multigrid and implicit residual smoothing. Both the Baldwin and Lomax algebraic equilibrium model and the Johnson and King one-half-equation nonequilibrium model are used to establish turbulence closure. Implementation of these models is described.

  8. A Split-Step Scheme for the Incompressible Navier-Stokes

    SciTech Connect

    Henshaw, W; Petersson, N A

    2001-06-12

    We describe a split-step finite-difference scheme for solving the incompressible Navier-Stokes equations on composite overlapping grids. The split-step approach decouples the solution of the velocity variables from the solution of the pressure. The scheme is based on the velocity-pressure formulation and uses a method of lines approach so that a variety of implicit or explicit time stepping schemes can be used once the equations have been discretized in space. We have implemented both second-order and fourth-order accurate spatial approximations that can be used with implicit or explicit time stepping methods. We describe how to choose appropriate boundary conditions to make the scheme accurate and stable. A divergence damping term is added to the pressure equation to keep the numerical dilatation small. Several numerical examples are presented.

  9. Progress with multigrid schemes for hypersonic flow problems

    NASA Technical Reports Server (NTRS)

    Radespiel, R.; Swanson, R. C.

    1991-01-01

    Several multigrid schemes are considered for the numerical computation of viscous hypersonic flows. For each scheme, the basic solution algorithm uses upwind spatial discretization with explicit multistage time stepping. Two-level versions of the various multigrid algorithms are applied to the two-dimensional advection equation, and Fourier analysis is used to determine their damping properties. The capabilities of the multigrid methods are assessed by solving three different hypersonic flow problems. Some new multigrid schemes based on semicoarsening strategies are shown to be quite effective in relieving the stiffness caused by the high-aspect-ratio cells required to resolve high Reynolds number flows. These schemes exhibit good convergence rates for Reynolds numbers up to 200 × 10^6 and Mach numbers up to 25.

  10. Progress with multigrid schemes for hypersonic flow problems

    SciTech Connect

    Radespiel, R.; Swanson, R.C.

    1995-01-01

    Several multigrid schemes are considered for the numerical computation of viscous hypersonic flows. For each scheme, the basic solution algorithm employs upwind spatial discretization with explicit multistage time stepping. Two-level versions of the various multigrid algorithms are applied to the two-dimensional advection equation, and Fourier analysis is used to determine their damping properties. The capabilities of the multigrid methods are assessed by solving three different hypersonic flow problems. Some new multigrid schemes based on semicoarsening strategies are shown to be quite effective in relieving the stiffness caused by the high-aspect-ratio cells required to resolve high Reynolds number flows. These schemes exhibit good convergence rates for Reynolds numbers up to 200 × 10^6 and Mach numbers up to 25. 32 refs., 31 figs., 1 tab.

  11. Generalized formulation of a class of explicit and implicit TVD schemes

    NASA Technical Reports Server (NTRS)

    Yee, H. C.

    1985-01-01

    A one-parameter family of second-order explicit and implicit total variation diminishing (TVD) schemes is reformulated so that a simpler and wider group of limiters is included. The resulting scheme can be viewed as a symmetrical algorithm with a variety of numerical dissipation terms that are designed for weak solutions of hyperbolic problems. This is a generalization of the recent work of Roe and Davis to a wider class of symmetric schemes other than Lax-Wendroff. The main properties of the present class of schemes are that they can be implicit and that, when steady-state calculations are sought, the numerical solution is independent of the time step.
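
    The limiter idea underlying such symmetric TVD schemes can be illustrated with the standard minmod slope limiter (a generic sketch of the mechanism, not Yee's full algorithm):

```python
# Minmod-limited slopes: second-order reconstruction in smooth regions,
# automatic fallback to first order at extrema, which is what keeps the
# total variation of the reconstruction from growing.
def minmod(a, b):
    if a * b <= 0.0:
        return 0.0          # local extremum or sign change: drop slope
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    # minmod of the one-sided differences in each interior cell
    return [minmod(u[i] - u[i-1], u[i+1] - u[i])
            for i in range(1, len(u) - 1)]
```
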

  12. Runge-Kutta methods combined with compact difference schemes for the unsteady Euler equations

    NASA Technical Reports Server (NTRS)

    Yu, S. T.; Tsai, Y.-L. P.; Hsieh, K. C.

    1992-01-01

    An investigation of Runge-Kutta time stepping, combined with compact difference schemes to solve the unsteady Euler equations, is presented. Initially, a generalized form of an N-step Runge-Kutta technique is derived. By comparing this generalized form with its Taylor series counterpart, the criteria for the three-step and four-step schemes to be third- and fourth-order accurate are obtained.
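
    The comparison with the Taylor series can be reproduced on the test equation y' = zy: a Jameson-type N-stage scheme u^(k) = u^n + a_k h F(u^(k-1)) has an amplification polynomial that should match the expansion of e^z through the design order (the stage coefficients below are the standard four-stage choice, given as an assumed example):

```python
# Amplification polynomial of an N-stage Runge-Kutta scheme of Jameson
# type, applied to y' = z*y. With stage coefficients [1/4, 1/3, 1/2, 1]
# it reproduces the Taylor expansion of e^z through the z**4 term.
from math import exp, factorial

def amplification(coeffs, z):
    r = 1.0
    for a in coeffs:          # r <- 1 + a*z*r, stage by stage
        r = 1.0 + a * z * r
    return r

coeffs = [0.25, 1.0 / 3.0, 0.5, 1.0]
z = 0.1
r = amplification(coeffs, z)
taylor = sum(z**k / factorial(k) for k in range(5))  # e^z through z**4
```
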

  13. Low-dissipation and -dispersion Runge-Kutta schemes for computational acoustics

    NASA Technical Reports Server (NTRS)

    Hu, F. Q.; Hussaini, M. Y.; Manthey, J.

    1994-01-01

    In this paper, we investigate accurate and efficient time advancing methods for computational acoustics, where non-dissipative and non-dispersive properties are of critical importance. Our analysis pertains to the application of Runge-Kutta methods to high-order finite difference discretizations. In many CFD applications, multi-stage Runge-Kutta schemes have often been favored for their low storage requirements and relatively large stability limits. For computing acoustic waves, however, the stability consideration alone is not sufficient, since the Runge-Kutta schemes entail both dissipation and dispersion errors. The time step is then limited by the tolerable dissipation and dispersion errors in the computation. In the present paper, it is shown that if the traditional Runge-Kutta schemes are used for time advancing in acoustic problems, time steps much smaller than that allowed by the stability limit are necessary. Low-Dissipation and -Dispersion Runge-Kutta (LDDRK) schemes are proposed, based on an optimization that minimizes the dissipation and dispersion errors for wave propagation. Order optimizations of both single-step and two-step alternating schemes are considered. The proposed LDDRK schemes are remarkably more efficient than the classical Runge-Kutta schemes for acoustic computations. Moreover, low-storage implementations of the optimized schemes are discussed. Special issues of implementing numerical boundary conditions in the LDDRK schemes are also addressed.
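
    The dissipation and dispersion measures motivating LDDRK can be computed directly from a scheme's amplification factor; the sketch below evaluates them for classical RK4 at omega·h = 0.5 (an illustrative check, not the optimization itself):

```python
# For the test wave y' = i*omega*y, the exact amplification over one
# step is e^{i*omega*h}. A scheme's amplitude error |r| - 1 and phase
# error arg(r) - omega*h limit the usable time step well before the
# stability bound does.
import cmath

def rk4_amplification(z):
    # amplification factor of classical RK4 for y' = z*y
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

wh = 0.5                               # omega * h (resolved wavenumber)
r = rk4_amplification(1j * wh)
dissipation = abs(r) - 1.0             # amplitude (dissipation) error
dispersion = cmath.phase(r) - wh       # phase (dispersion) error
```

    Both errors are small but nonzero here and grow rapidly with omega·h, which is why acoustic computations with standard RK schemes are accuracy-limited rather than stability-limited.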

  14. Application of low dissipation and dispersion Runge-Kutta schemes to benchmark problems in computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Hu, F. Q.; Hussaini, M. Y.; Manthey, J.

    1995-01-01

    We investigate accurate and efficient time advancing methods for computational aeroacoustics, where non-dissipative and non-dispersive properties are of critical importance. Our analysis pertains to the application of Runge-Kutta methods to high-order finite difference discretizations. In many CFD applications, multi-stage Runge-Kutta schemes have often been favored for their low storage requirements and relatively large stability limits. For computing acoustic waves, however, the stability consideration alone is not sufficient, since the Runge-Kutta schemes entail both dissipation and dispersion errors. The time step is then limited by the tolerable dissipation and dispersion errors in the computation. In the present paper, it is shown that if the traditional Runge-Kutta schemes are used for time advancing in acoustic problems, time steps much smaller than that allowed by the stability limit are necessary. Low Dissipation and Dispersion Runge-Kutta (LDDRK) schemes are proposed, based on an optimization that minimizes the dissipation and dispersion errors for wave propagation. Optimizations of both single-step and two-step alternating schemes are considered. The proposed LDDRK schemes are remarkably more efficient than the classical Runge-Kutta schemes for acoustic computations. Numerical results for each category of the benchmark problems are presented. Moreover, low-storage implementations of the optimized schemes are discussed. Special issues of implementing numerical boundary conditions in the LDDRK schemes are also addressed.

  15. Finite-volume scheme for anisotropic diffusion

    NASA Astrophysics Data System (ADS)

    van Es, Bram; Koren, Barry; de Blank, Hugo J.

    2016-02-01

    In this paper, we apply a special finite-volume scheme, limited to smooth temperature distributions and Cartesian grids, to test the importance of connectivity of the finite volumes. The area of application is nuclear fusion plasma with field line aligned temperature gradients and extreme anisotropy. We apply the scheme to the anisotropic heat-conduction equation, and compare its results with those of existing finite-volume schemes for anisotropic diffusion. Also, we introduce a general model adaptation of the steady diffusion equation for extremely anisotropic diffusion problems with closed field lines.

  16. PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. I. ALGORITHM

    SciTech Connect

    Maron, Jason L.; McNally, Colin P.; Mac Low, Mordecai-Mark

    2012-05-01

    We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second-order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. This paper derives and documents the Phurbas algorithm as implemented in Phurbas version 1.1. A following paper presents the implementation and test problem results.

  17. The basic function scheme of polynomial type

    SciTech Connect

    WU, Wang-yi; Lin, Guang

    2009-12-01

    A new numerical method, the Basic Function Method, is proposed. This method can directly discretize differential operators on unstructured grids. By using an expansion in basic functions to approximate the exact function, central and upwind schemes for derivatives are constructed. By using second-order polynomials as basic functions and applying the flux splitting technique together with a combination of central and upwind schemes to suppress non-physical oscillations near shock waves, a second-order basic function scheme of polynomial type for the numerical solution of inviscid compressible flow is constructed in this paper. Numerical results for many typical examples of two-dimensional inviscid compressible transonic and supersonic steady flow illustrate that it is a new scheme with high accuracy and high resolution of shock waves. In particular, combined with the adaptive remeshing technique, satisfactory results can be obtained with these schemes.

  18. A parallel numerical simulation for supersonic flows using zonal overlapped grids and local time steps for common and distributed memory multiprocessors

    SciTech Connect

    Patel, N.R.; Sturek, W.B.; Hiromoto, R.

    1989-01-01

    Parallel Navier-Stokes codes are developed to solve both two-dimensional and three-dimensional flow fields in and around ramjet and nose tip configurations. A multi-zone overlapped grid technique is used to extend an explicit finite-difference method to more complicated geometries. Parallel implementations are developed for execution on both distributed- and common-memory multiprocessor architectures. For steady-state solutions, the use of the local time-step method has the inherent advantage of reducing the communications overhead commonly incurred by parallel implementations. Computational results of the codes are given for a series of test problems. The parallel partitioning of computational zones is also discussed. 5 refs., 18 figs.

  19. An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.

    1993-01-01

    We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis's artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to quantify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than is possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
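
    A Richardson-extrapolation error estimate of the kind used as a refinement indicator can be sketched on a scalar model problem (forward Euler here for simplicity; the paper uses a MacCormack scheme):

```python
# Richardson-type discretization-error estimate: solve with step h and
# h/2, then estimate the fine-grid error from the difference, given the
# scheme's known order p.
import math

def euler_solve(f, y0, t_end, n):
    h, y = t_end / n, y0
    for _ in range(n):
        y += h * f(y)
    return y

f = lambda y: -y                        # model problem y' = -y, y(0) = 1
coarse = euler_solve(f, 1.0, 1.0, 50)   # step h
fine = euler_solve(f, 1.0, 1.0, 100)    # step h/2
p = 1                                   # forward Euler is first order
err_est = (fine - coarse) / (2**p - 1)  # estimate of the fine-grid error
```

    Cells where such an estimate exceeds the tolerance are flagged for refinement; here the estimate tracks the true error of the fine solution closely.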

  20. Application of power time-projection on the operator-splitting coupling scheme of the TRACE/S3K coupled code

    SciTech Connect

    Wicaksono, D.; Zerkak, O.; Nikitin, K.; Ferroukhi, H.; Chawla, R.

    2013-07-01

    This paper reports refinement studies on the temporal coupling scheme and time-stepping management of TRACE/S3K, a dynamically coupled code version of the thermal-hydraulics system code TRACE and the 3D core simulator Simulate-3K. The studies were carried out for two test cases, namely a PWR rod ejection accident and the Peach Bottom 2 Turbine Trip Test 2. The solution of the coupled calculation, especially the power peak, proves to be very sensitive to the time-step size with the currently employed conventional operator-splitting. Furthermore, a very small time-step size is necessary to achieve decent accuracy. This degrades the trade-off between accuracy and performance. A simple and computationally cheap implementation of time-projection of power has been shown to be able to improve the convergence of the coupled calculation. This scheme is able to achieve a prescribed accuracy with a larger time-step size. (authors)
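
    The power time-projection idea can be illustrated on a toy power history: linearly extrapolating the power to the middle of the upcoming coupling step gives a markedly smaller quadrature error than freezing it at its last value (a schematic sketch, not the TRACE/S3K implementation; the helper names and test function are assumptions):

```python
# Conventional operator splitting freezes the last computed power over
# the coupling step; time-projection extrapolates it from the two
# previous values, reducing the first-order splitting error.
def frozen(p_prev, p_now, dt):
    return p_now * dt                       # conventional splitting

def projected(p_prev, p_now, dt):
    mid = p_now + 0.5 * (p_now - p_prev)    # extrapolate to t + dt/2
    return mid * dt

# Toy check against a smoothly varying power history P(t) = t**2.
P = lambda t: t * t
dt = 0.1
exact = (1.1**3 - 1.0**3) / 3               # integral of P over [1, 1.1]
e_frozen = abs(frozen(P(0.9), P(1.0), dt) - exact)
e_proj = abs(projected(P(0.9), P(1.0), dt) - exact)
```
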

  1. An Efficient Variable-Length Data-Compression Scheme

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Kiely, Aaron B.

    1996-01-01

    Adaptive variable-length coding scheme for compression of stream of independent and identically distributed source data involves either Huffman code or alternating run-length Huffman (ARH) code, depending on characteristics of data. Enables efficient compression of output of lossless or lossy precompression process, with speed and simplicity greater than those of older coding schemes developed for same purpose. In addition, scheme suitable for parallel implementation on hardware with modular structure, provides for rapid adaptation to changing data source, compatible with block orientation to alleviate memory requirements, ensures efficiency over wide range of entropy, and easily combined with such other communication schemes as those for containment of errors and for packetization.
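
    For context, a generic (non-adaptive) Huffman coder can be sketched as follows; the NASA scheme's adaptive switch between Huffman and alternating run-length Huffman (ARH) coding is not reproduced here:

```python
# Minimal Huffman coder: build an optimal prefix-free code from symbol
# frequencies via repeated merging of the two lightest subtrees.
import heapq
from collections import Counter

def huffman_code(data):
    # heap entries: (weight, tie-breaker, {symbol: codeword-so-far})
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        n0, _, c0 = heapq.heappop(heap)
        n1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c0.items()}
        merged.update({s: "1" + b for s, b in c1.items()})
        heapq.heappush(heap, (n0 + n1, i, merged))
        i += 1
    return heap[0][2]

code = huffman_code("abracadabra")
encoded = "".join(code[s] for s in "abracadabra")
```
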

  2. Quantitative evaluation of numerical integration schemes for Lagrangian particle dispersion models

    NASA Astrophysics Data System (ADS)

    Ramli, Huda Mohd.; Esler, J. Gavin

    2016-07-01

    A rigorous methodology for the evaluation of integration schemes for Lagrangian particle dispersion models (LPDMs) is presented. A series of one-dimensional test problems are introduced, for which the Fokker-Planck equation is solved numerically using a finite-difference discretisation in physical space and a Hermite function expansion in velocity space. Numerical convergence errors in the Fokker-Planck equation solutions are shown to be much less than the statistical error associated with a practical-sized ensemble (N = 10^6) of LPDM solutions; hence, the former can be used to validate the latter. The test problems are then used to evaluate commonly used LPDM integration schemes. The results allow for optimal time-step selection for each scheme, given a required level of accuracy. The following recommendations are made for use in operational models. First, if computational constraints require the use of moderate to long time steps, it is more accurate to solve the random displacement model approximation to the LPDM rather than use existing schemes designed for long time steps. Second, useful gains in numerical accuracy can be obtained, at moderate additional computational cost, by using the relatively simple "small-noise" scheme of Honeycutt.
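
    A minimal example of the kind of scheme being evaluated is Euler-Maruyama applied to the 1-D Ornstein-Uhlenbeck velocity process underlying LPDMs (parameter values and ensemble size are assumptions for illustration):

```python
# Euler-Maruyama for dW = -(W/tau) dt + sigma dB over an ensemble of
# particles; the exact stationary velocity variance is sigma**2 * tau / 2,
# which the ensemble statistics should approach.
import random, math

random.seed(42)
tau, sigma, dt, nsteps, npart = 1.0, 1.0, 0.01, 500, 2000
w = [0.0] * npart
for _ in range(nsteps):
    w = [wi - (wi / tau) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
         for wi in w]
var = sum(wi * wi for wi in w) / npart   # ensemble velocity variance
```

    As the abstract notes, with a practical ensemble size the statistical error of such estimates dominates the deterministic time-stepping error, which is the point of validating against a Fokker-Planck solution instead.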

  3. A gas-kinetic BGK scheme for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Xu, Kun

    2000-01-01

    This paper presents an improved gas-kinetic scheme based on the Bhatnagar-Gross-Krook (BGK) model for the compressible Navier-Stokes equations. The current method extends the previous gas-kinetic Navier-Stokes solver developed by Xu and Prendergast by implementing a general nonequilibrium state to represent the gas distribution function at the beginning of each time step. As a result, the requirement in the previous scheme, such as the particle collision time being less than the time step for the validity of the BGK Navier-Stokes solution, is removed. Therefore, the applicable regime of the current method is much enlarged and the Navier-Stokes solution can be obtained accurately regardless of the ratio between the collision time and the time step. The gas-kinetic Navier-Stokes solver developed by Chou and Baganoff is the limiting case of the current method, and it is valid only under such a limiting condition. Also, in this paper, the appropriate implementation of boundary condition for the kinetic scheme, different kinetic limiting cases, and the Prandtl number fix are presented. The connection among artificial dissipative central schemes, Godunov-type schemes, and the gas-kinetic BGK method is discussed. Many numerical tests are included to validate the current method.

  4. An implicit midpoint difference scheme for the fractional Ginzburg-Landau equation

    NASA Astrophysics Data System (ADS)

    Wang, Pengde; Huang, Chengming

    2016-05-01

    This paper proposes and analyzes an efficient difference scheme for the nonlinear complex Ginzburg-Landau equation involving the fractional Laplacian. The scheme is based on the implicit midpoint rule for the temporal discretization and a weighted and shifted Grünwald difference operator for the spatial fractional Laplacian. By virtue of a careful analysis of the difference operator, some useful inequalities with respect to suitable fractional Sobolev norms are established. Then the numerical solution is shown to be bounded, and convergent in the l_h^2 norm with the optimal order O(τ^2 + h^2), where τ is the time step and h the mesh size. The a priori bound as well as the convergence order holds unconditionally, in the sense that no restriction on the time step τ in terms of the mesh size h needs to be assumed. Numerical tests are performed to validate the theoretical results and the effectiveness of the scheme.
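
    The implicit midpoint rule used for the temporal discretization can be checked for second-order convergence on the scalar test problem y' = -y, where the implicit stage has a closed form:

```python
# Implicit midpoint rule for y' = lam*y. The implicit update
# y_{n+1} = y_n + h*lam*(y_n + y_{n+1})/2 is solved in closed form;
# halving h should quarter the global error (second order in time).
import math

def midpoint_solve(lam, y0, t_end, n):
    h, y = t_end / n, y0
    for _ in range(n):
        y = y * (1.0 + 0.5 * h * lam) / (1.0 - 0.5 * h * lam)
    return y

exact = math.exp(-1.0)
e1 = abs(midpoint_solve(-1.0, 1.0, 1.0, 20) - exact)
e2 = abs(midpoint_solve(-1.0, 1.0, 1.0, 40) - exact)
ratio = e1 / e2          # ~4 confirms O(tau^2) convergence
```
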

  5. Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa; Ash, Robert L.

    1992-01-01

    The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employed a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the κ-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order central-difference operator. Two classes of explicit time integration have been investigated for solving the compressible inviscid/viscous flow problems: two-stage predictor-corrector schemes, and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes have been modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C^0 continuity of the grid across the block interface. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was validation of

  6. Adaptive spacetime method using Riemann jump conditions for coupled atomistic-continuum dynamics

    NASA Astrophysics Data System (ADS)

    Kraczek, B.; Miller, S. T.; Haber, R. B.; Johnson, D. D.

    2010-03-01

    We combine the Spacetime Discontinuous Galerkin (SDG) method for elastodynamics with the mathematically consistent Atomistic Discontinuous Galerkin (ADG) method in a new scheme that concurrently couples continuum and atomistic models of dynamic response in solids. The formulation couples non-overlapping continuum and atomistic models across sharp interfaces by weakly enforcing jump conditions, for both momentum balance and kinematic compatibility, using Riemann values to preserve the characteristic structure of the underlying hyperbolic system. Momentum balances to within machine-precision accuracy over every element, on each atom, and over the coupled system, with small, controllable energy dissipation in the continuum region that ensures numerical stability. When implemented on suitable unstructured spacetime grids, the continuum SDG model offers linear computational complexity in the number of elements and powerful adaptive analysis capabilities that readily bridge between atomic and continuum scales in both space and time. A special trace operator for the atomic velocities and an associated atomistic traction field enter the jump conditions at the coupling interface. The trace operator depends on parameters that specify, at the scale of the atomic spacing, the position of the coupling interface relative to the atoms. In a key finding, we demonstrate that optimizing these parameters suppresses spurious reflections at the coupling interface without the use of non-physical damping or special boundary conditions. We formulate the implicit SDG-ADG coupling scheme in up to three spatial dimensions, and describe an efficient iterative solution scheme that outperforms common explicit schemes, such as the Velocity Verlet integrator. Numerical examples, in 1d×time and employing both linear and nonlinear potentials, demonstrate the performance of the SDG-ADG method and show how adaptive spacetime meshing reconciles disparate time steps and resolves atomic-scale signals

  7. A time stepping coupled finite element-state space modeling environment for synchronous machine performance and design analysis in the ABC frame of reference

    NASA Astrophysics Data System (ADS)

    Deng, Fang

    This dissertation centers on the development of a modeling environment to predict the performance and operating characteristics of salient-pole synchronous generators. The model consists of two sections: a time stepping two-dimensional (2D) magnetostatic field finite element (FE) computation algorithm coupled to a state-space (SS) time-domain model of the winding circuits. Hence the term time stepping Coupled Finite Element-State Space (CFE-SS) modeling environment is adopted for this approach. In the FE section, magnetic vector potential (MVP) based FE formulations and computations of 2D magnetostatic fields are used to obtain the magnetic field solutions throughout a machine's cross-section at a sequence (sampling) of rotor positions covering a complete (360 electrical degrees) ac cycle. These field solutions yield the winding inductances by means of an energy and current perturbation method. The output of the FE section is the magnetic field solutions and the entire set of phase, field, damper, and sleeve winding inductance profiles versus rotor position, including all space harmonics due to rotor saliency, damper bar slotting, sleeve segmentation, stator slotting, and magnetic saturation. These inductance profiles are decomposed into their harmonic components by Fourier analysis. The magnetic field solutions and resulting winding inductances represent the key input data to the SS portion of the CFE-SS modeling environment. Laminated machine iron core loss calculations, which include the losses in the stator and rotor as well as the pole face, are subsequently performed using the magnetic field solution data. Conversely, the output of the SS portion is the entire set of phase, field, damper winding (circuit), and sleeve segment currents, which also include all the resulting time harmonics. These winding current results form in turn the key input data to the FE portion of the modeling environment which is

  8. New parallelizable schemes for integrating the Dissipative Particle Dynamics with Energy conservation.

    PubMed

    Homman, Ahmed-Amine; Maillet, Jean-Bernard; Roussel, Julien; Stoltz, Gabriel

    2016-01-14

    This work presents new parallelizable numerical schemes for the integration of dissipative particle dynamics with energy conservation. So far, no numerical scheme introduced in the literature is able to correctly preserve the energy over long times and give rise to small errors in average properties for moderately small time steps, while being straightforwardly parallelizable. We present in this article two new methods, both straightforwardly parallelizable, that correctly preserve the total energy of the system. We illustrate the accuracy and performance of these new schemes on both equilibrium and nonequilibrium parallel simulations. PMID:26772559

  10. Stability analysis of pressure correction schemes for the Navier–Stokes equations with traction boundary conditions

    NASA Astrophysics Data System (ADS)

    Lee, Sanghyun; Salgado, Abner J.

    2016-09-01

    We present a stability analysis for two different rotational pressure correction schemes with open and traction boundary conditions. First, we provide a stability analysis for a rotational version of the grad-div stabilized scheme of [A. Bonito, J.-L. Guermond, and S. Lee. Modified pressure-correction projection methods: Open boundary and variable time stepping. In Numerical Mathematics and Advanced Applications - ENUMATH 2013, volume 103 of Lecture Notes in Computational Science and Engineering, pages 623-631. Springer, 2015]. This scheme turns out to be unconditionally stable, provided the stabilization parameter is suitably chosen. We also establish a conditional stability result for the boundary correction scheme presented in [E. Bansch. A finite element pressure correction scheme for the Navier-Stokes equations with traction boundary condition. Comput. Methods Appl. Mech. Engrg., 279:198-211, 2014]. These results are shown by employing the equivalence between stabilized gauge Uzawa methods and rotational pressure correction schemes with traction boundary conditions.

  11. Advection Scheme for Phase-changing Porous Media Flow of Fluids with Large Density Ratio

    NASA Astrophysics Data System (ADS)

    Zhang, Duan; Padrino, Juan

    2015-11-01

    Many flows in porous media involve phase changes between fluids with a large density ratio. For instance, in the water-steam phase change the density ratio is about 1000. These phase changes can result from physical changes or from chemical reactions, such as fuel combustion in a porous medium. Based on mass conservation, the velocity ratio between the fluids is of the same order as the density ratio. As a result, the controlling Courant number for the time step in a numerical simulation is determined by the high-velocity, low-density phase, leading to small time steps. In this work we introduce a numerical approximation to increase the time step by taking advantage of the large density ratio. We provide an analytical error estimate for this approximate numerical scheme. Numerical examples show that using this approximation an approximately 40-fold speedup can be achieved at the cost of a few percent error. Work partially supported by an LDRD project of LANL.
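
    The time-step penalty described above can be made concrete with a back-of-the-envelope CFL estimate. The helper `cfl_time_step` and the water/steam-like numbers below are hypothetical, not taken from the paper:

```python
# Illustrative CFL time-step estimate for two-phase flow with a large
# density ratio (hypothetical values; water/steam-like ratio ~1000).

def cfl_time_step(velocity, dx, courant=0.5):
    """Largest stable time step for advection speed `velocity` on spacing dx."""
    return courant * dx / abs(velocity)

dx = 1e-3                            # grid spacing [m]
rho_liquid, rho_vapor = 1000.0, 1.0  # densities [kg/m^3]
u_liquid = 1e-3                      # liquid velocity [m/s]

# Mass conservation across the phase change: rho_l * u_l = rho_v * u_v,
# so the vapor velocity exceeds the liquid one by roughly the density ratio.
u_vapor = u_liquid * rho_liquid / rho_vapor

dt_liquid = cfl_time_step(u_liquid, dx)
dt_vapor = cfl_time_step(u_vapor, dx)
print(dt_liquid / dt_vapor)  # ~1000: the light, fast phase dictates the step
```

    The ratio of admissible time steps mirrors the density ratio, which is exactly the gap the paper's approximation exploits.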

  12. The effect of time step, thermostat, and strain rate on ReaxFF simulations of mechanical failure in diamond, graphene, and carbon nanotube.

    PubMed

    Jensen, Benjamin D; Wise, Kristopher E; Odegard, Gregory M

    2015-08-01

    As the sophistication of reactive force fields for molecular modeling continues to increase, their use and applicability have also expanded, sometimes beyond the scope of their original development. The Reax Force Field (ReaxFF), for example, was originally developed to model chemical reactions, but is a promising candidate for modeling fracture because of its ability to treat covalent bond cleavage. Performing reliable simulations of a complex process like fracture, however, requires an understanding of the effects that various modeling parameters have on the behavior of the system. This work assesses the effects of time step size, thermostat algorithm and coupling coefficient, and strain rate on the fracture behavior of three carbon-based materials: graphene, diamond, and a carbon nanotube. It is determined that the simulated stress-strain behavior is relatively independent of the thermostat algorithm, so long as the coupling coefficients are kept above a certain threshold. Likewise, the stress-strain response of the materials is also independent of the strain rate, provided it is kept below a maximum strain rate. Finally, the mechanical properties of the materials predicted by the Chenoweth C/H/O parameterization for ReaxFF are compared with literature values. Some deficiencies of the Chenoweth C/H/O parameterization in predicting mechanical properties of carbon materials are observed. PMID:26096628

  13. The relationship between Monte Carlo estimators of heterogeneity and error for daily to monthly time steps in a small Minnesota precipitation gauge network

    NASA Astrophysics Data System (ADS)

    Wright, Michael; Ferreira, Celso; Houck, Mark; Giovannettone, Jason

    2015-07-01

    Precipitation quantile estimates are used in engineering, agriculture, and a variety of other disciplines. Index flood regional frequency methods pool normalized gauge data in the case of homogeneity among the constituent gauges of the region. Unitless regional quantile estimates are output and rescaled at each gauge. Because violation of the homogeneity hypothesis is a major component of quantile estimation error in regional frequency analysis, heterogeneity estimators should be "reasonable proxies" for the error of quantile estimation. In this study, three Monte Carlo heterogeneity statistics tested in Hosking and Wallis (1997) are plotted against Monte Carlo estimates of quantile error for all five-or-more-gauge regionalizations in a 12-gauge network in the Twin Cities region of Minnesota. Upper-tail quantiles with nonexceedance probabilities of 0.75 and above are examined at time steps ranging from daily to monthly. A linear relationship between heterogeneity and error estimates is found and quantified using Pearson's r score. Two of Hosking and Wallis's (1997) heterogeneity measures, incorporating the coefficient of variation in one case and additionally the skewness in the other, are found to be reasonable proxies for quantile error at the L-moment ratio values characterizing these data. This result, in addition to confirming the utility of a commonly used coefficient-of-variation-based heterogeneity statistic, provides evidence for the utility of a heterogeneity measure that incorporates skewness information.

  14. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  15. Self-consistency based control scheme for magnetization dynamics

    SciTech Connect

    Albuquerque, G.; Miltat, J.; Thiaville, A.

    2001-06-01

    A numerical framework is presented for the solution of the Landau-Lifshitz-Gilbert equation of magnetization motion using a semi-implicit Crank-Nicolson integration scheme. Along with the details of both space and time domain discretizations, we report on the development of a physically based self-consistency criterion that allows for a quantitative measurement of error in dynamic micromagnetic simulations. In essence, this criterion relies on recalculating from the actual magnetization motion the imposed phenomenological damping constant. Test calculations were performed with special attention paid to the determination of suitable integration time steps. (c) 2001 American Institute of Physics.

  16. New Low Dissipative High Order Schemes for MHD Equations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, Bjoern; Mansour, Nagi (Technical Monitor)

    2002-01-01

    The goal of this talk is to extend our recently developed highly parallelizable nonlinear stable high order schemes for complex multiscale hydrodynamic applications to the viscous MHD equations. These schemes employ multiresolution wavelets as adaptive numerical dissipation controls, both to limit the amount of dissipation and to aid the selection and/or blending of the appropriate types of dissipation to be used. The new scheme is formulated for both the conservative and non-conservative forms of the MHD equations in curvilinear grids.

  17. Ranking Schemes in Hybrid Boolean Systems: A New Approach.

    ERIC Educational Resources Information Center

    Savoy, Jacques

    1997-01-01

    Suggests a new ranking scheme especially adapted for hypertext environments in order to produce more effective retrieval results and still use Boolean search strategies. Topics include Boolean ranking schemes; single-term indexing and term weighting; fuzzy set theory extension; and citation indexing. (64 references) (Author/LRW)

  18. Enriching User-Oriented Class Associations for Library Classification Schemes.

    ERIC Educational Resources Information Center

    Pu, Hsiao-Tieh; Yang, Chyan

    2003-01-01

    Explores the possibility of adding user-oriented class associations to hierarchical library classification schemes. Analyses a log of book circulation records from a university library in Taiwan and shows that classification schemes can be made more adaptable by analyzing circulation patterns of similar users. (Author/LRW)

  19. Diablo 2.0: A modern DNS/LES code for the incompressible NSE leveraging new time-stepping and multigrid algorithms

    NASA Astrophysics Data System (ADS)

    Cavaglieri, Daniele; Bewley, Thomas; Mashayek, Ali

    2015-11-01

    We present a new code, Diablo 2.0, for the simulation of the incompressible NSE in channel and duct flows with strong grid stretching near walls. The code leverages the fractional step approach with a few twists. New low-storage IMEX (implicit-explicit) Runge-Kutta time-marching schemes are tested which are superior to the traditional and widely-used CN/RKW3 (Crank-Nicolson/Runge-Kutta-Wray) approach; the new schemes tested are L-stable in their implicit component, and offer improved overall order of accuracy and stability with, remarkably, similar computational cost and storage requirements. For duct flow simulations, our new code also introduces a new smoother for the multigrid solver for the pressure Poisson equation. The classic approach, involving alternating-direction zebra relaxation, is replaced by a new scheme, dubbed tweed relaxation, which achieves the same convergence rate with roughly half the computational cost. The code is then tested on the simulation of a shear flow instability in a duct, a classic problem in fluid mechanics which has been the object of extensive numerical modelling for its role as a canonical pathway to energetic turbulence in several fields of science and engineering.
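
    The implicit-explicit idea behind such time-marching schemes can be sketched on a scalar model problem: treat the stiff (e.g. viscous) term implicitly and the mild (e.g. convective) term explicitly. The code below is first-order IMEX Euler, not the paper's low-storage IMEX-RK schemes, and the decay rate and nonlinearity are invented for illustration:

```python
# First-order IMEX Euler on du/dt = lam*u + f(u): the stiff linear term
# lam*u is taken implicitly, the nonlinear term f(u) explicitly.

def imex_euler(u, dt, lam, f):
    """One IMEX Euler step: (u_new - u)/dt = lam*u_new + f(u)."""
    return (u + dt * f(u)) / (1.0 - dt * lam)

lam = -1000.0            # stiff decay rate (implicit part)
f = lambda u: -u * u     # mild nonlinearity (explicit part)
u, dt = 1.0, 0.01        # dt far above the explicit limit ~2/|lam|
for _ in range(100):
    u = imex_euler(u, dt, lam, f)
print(u)  # decays toward 0 despite the large time step
```

    A fully explicit scheme would blow up at this `dt`; the implicit treatment of the stiff part removes that restriction while keeping the nonlinear term cheap.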

  20. Numerical Modeling of Deep Mantle Convection: Advection and Diffusion Schemes for Marker Methods

    NASA Astrophysics Data System (ADS)

    Mulyukova, Elvira; Dabrowski, Marcin; Steinberger, Bernhard

    2013-04-01

    that we use for this study, the velocity field is discretised using second order triangular elements, which gives second order accuracy of interpolation from grid-nodes to markers. A fourth order Runge-Kutta solver is used to compute marker-trajectories. We reevaluate the velocity field for each of the intermediate steps of the ODE-solver, rendering our advection scheme to be fourth-order accurate in time. We compare two different approaches for performing the thermal diffusion step. In the first, more conventional approach, the energy equation is solved on a static grid. For this grid, we use first-order triangular elements and a higher resolution than for the velocity-grid, to compensate for the lower order elements. The temperature field is transferred between grid-nodes and markers, and a subgrid diffusion correction step (Gerya and Yuen, 2003) is included to account for the different spatial resolutions of the markers and the grid. In the second approach, the energy equation is solved directly on markers. To do this, we compute a constrained Delaunay triangulation, with markers as nodes, at every time step. We wish to resolve the large range of spatial scales of the solution at lowest possible computational cost. In several existing codes this is achieved with dynamically adaptive meshes, which use high resolution in regions with high solution gradients, and vice versa. The numerical scheme used in this study can be extended to include a similar feature, by regenerating the thermal and mechanical grids in the course of computation, adapting them to the temperature and chemistry fields carried by the markers. We present the results of thermochemical convection simulations obtained using the schemes outlined above, as well as the results of the numerical benchmarks commonly used in the geodynamics community. The quality of the solutions, as well as the computational cost of our schemes, are discussed.
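
    The fourth-order Runge-Kutta marker advection described above can be sketched as follows; the solid-body-rotation velocity field is a purely illustrative stand-in for the interpolated finite element solution, and the velocity is re-evaluated at each intermediate stage:

```python
# RK4 advection of a single marker in a steady 2D velocity field.
import math

def velocity(x, y):
    """Steady solid-body rotation about the origin (hypothetical field)."""
    return -y, x

def rk4_step(x, y, dt):
    # The velocity field is sampled at each of the four RK stages.
    k1x, k1y = velocity(x, y)
    k2x, k2y = velocity(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
    k3x, k3y = velocity(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
    k4x, k4y = velocity(x + dt * k3x, y + dt * k3y)
    x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
    return x, y

x, y = 1.0, 0.0
n = 1000
dt = 2 * math.pi / n
for _ in range(n):
    x, y = rk4_step(x, y, dt)
# After one full revolution the marker returns very near its start point,
# reflecting the fourth-order accuracy of the trajectory integration.
print(x, y)
```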

  1. Identification Schemes from Key Encapsulation Mechanisms

    NASA Astrophysics Data System (ADS)

    Anada, Hiroaki; Arita, Seiko

    We propose a generic conversion from a key encapsulation mechanism (KEM) to an identification (ID) scheme. The conversion derives the security for ID schemes against concurrent man-in-the-middle (cMiM) attacks from the security for KEMs against adaptive chosen ciphertext attacks on one-wayness (one-way-CCA2). Then, regarding the derivation as a design principle of ID schemes, we develop a series of concrete one-way-CCA2 secure KEMs. We start with El Gamal KEM and prove it secure against non-adaptive chosen ciphertext attacks on one-wayness (one-way-CCA1) in the standard model. Then, we apply a tag framework with the algebraic trick of Boneh and Boyen to make it one-way-CCA2 secure based on the Gap-CDH assumption. Next, we apply the CHK transformation or a target collision resistant hash function to exit the tag framework. And finally, as it is better to rely on the CDH assumption rather than the Gap-CDH assumption, we apply the Twin DH technique of Cash, Kiltz and Shoup. The application is not “black box” and we do it by making the Twin DH technique compatible with the algebraic trick. The ID schemes obtained from our KEMs show the highest performance in both computational amount and message length compared with previously known ID schemes secure against concurrent man-in-the-middle attacks.
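
    As a concrete picture of the KEM interface the conversion starts from, here is a toy textbook El Gamal KEM in Python. The parameters are far too small to be secure and the group choice is illustrative only, not the paper's hardened one-way-CCA2 construction:

```python
# Toy El Gamal key encapsulation mechanism (KEM) over Z_p^*.
# Illustrative parameters only; NOT secure.
import random

P = 2**61 - 1   # a Mersenne prime modulus (toy choice)
G = 3           # group element used as generator (assumed for illustration)

def keygen(rng=random):
    sk = rng.randrange(2, P - 1)
    return pow(G, sk, P), sk            # (public key, secret key)

def encapsulate(pk, rng=random):
    r = rng.randrange(2, P - 1)
    return pow(G, r, P), pow(pk, r, P)  # (ciphertext, shared key)

def decapsulate(sk, c):
    return pow(c, sk, P)                # c^sk = g^(r*sk) = pk^r

pk, sk = keygen()
c, k_sender = encapsulate(pk)
assert decapsulate(sk, c) == k_sender   # both sides derive the same key
```

    The ID-scheme conversion in the paper builds its challenge-response protocol on top of exactly this encapsulate/decapsulate interface.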

  2. AMR vs High Order Schemes Wavelets as a Guide

    SciTech Connect

    Jameson, L.

    2000-10-04

    The final goal behind any numerical method is to give the smallest wall-clock time for a given final-time error or, conversely, the smallest run-time error for a given wall-clock time. Here a comparison is given between adaptive mesh refinement schemes and non-adaptive schemes of higher order. It is shown that, in three-dimensional calculations, for AMR schemes to be competitive the finest scale must be restricted to an extremely, and unrealistically, small percentage of the computational domain.

  3. Recent progress on essentially non-oscillatory shock capturing schemes

    NASA Technical Reports Server (NTRS)

    Osher, Stanley; Shu, Chi-Wang

    1989-01-01

    An account is given of the construction of efficient implementations of 'essentially nonoscillatory' (ENO) schemes that approximate systems of hyperbolic conservation laws. ENO schemes use a local adaptive stencil to automatically obtain information from regions of smoothness when the solution develops discontinuities. Approximations employing ENOs can thereby obtain uniformly high accuracy to the very onset of discontinuities, while retaining a sharp and essentially nonoscillatory shock transition. For ease of implementation, ENO schemes applying the adaptive stencil concept to the numerical fluxes and employing a TVD Runge-Kutta-type time discretization are constructed.

  4. A Second-Order Iterative Implicit Explicit Hybrid Scheme for Hyperbolic Systems of Conservation Laws

    NASA Astrophysics Data System (ADS)

    Dai, Wenlong; Woodward, Paul R.

    1996-10-01

    An iterative implicit-explicit hybrid scheme is proposed for hyperbolic systems of conservation laws. Each wave in a system may be treated implicitly, or explicitly, or partially implicitly and partially explicitly, depending on its associated Courant number in each numerical cell, and the scheme is able to switch smoothly between implicit and explicit calculations. The scheme is of Godunov type in both explicit and implicit regimes, is in strict conservation form, and is accurate to second order in both space and time for all Courant numbers. The computer code for the scheme is easy to vectorize. The multicoloring technique proposed in this paper may reduce the number of iterations required to reach a converged solution by several orders of magnitude for a large time step. The features of the scheme are shown through numerical examples.

  5. 3D Structured Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Banks, D. W.; Hafez, M. M.

    1996-01-01

    Grid adaptation for structured meshes is the art of using information from an existing, but poorly resolved, solution to automatically redistribute the grid points in such a way as to improve the resolution in regions of high error, and thus the quality of the solution. This involves: (1) generating a grid via some standard algorithm, (2) calculating a solution on this grid, (3) adapting the grid to this solution, (4) recalculating the solution on the adapted grid, and (5) repeating steps 3 and 4 to satisfaction. Steps 3 and 4 can be repeated until some 'optimal' grid is converged to, but typically this is not worth the effort and just two or three repeat calculations are necessary. They may also be repeated every 5-10 time steps for unsteady calculations.
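
    Step 3, adapting the grid to the solution, can be illustrated in 1D by equidistributing a monitor function so that points cluster where the solution gradient is large. This sketch is not the article's 3D structured-grid procedure; the arc-length-like monitor and the `alpha` weight are assumptions:

```python
# 1D grid redistribution by equidistribution of a monitor function.
import math

def adapt_grid(x, u, alpha=10.0):
    """Redistribute nodes x by equidistributing m = sqrt(1 + alpha*u'^2)."""
    # Piecewise-constant monitor on each interval.
    m = [math.sqrt(1.0 + alpha * ((u[i+1] - u[i]) / (x[i+1] - x[i]))**2)
         for i in range(len(x) - 1)]
    # Cumulative integral of the monitor.
    s = [0.0]
    for i, mi in enumerate(m):
        s.append(s[-1] + mi * (x[i+1] - x[i]))
    # Invert: place the new nodes at equal increments of s.
    n = len(x)
    new_x, j = [x[0]], 0
    for k in range(1, n - 1):
        target = s[-1] * k / (n - 1)
        while s[j+1] < target:
            j += 1
        frac = (target - s[j]) / (s[j+1] - s[j])
        new_x.append(x[j] + frac * (x[j+1] - x[j]))
    new_x.append(x[-1])
    return new_x

x = [i / 40 for i in range(41)]
u = [math.tanh(20 * (xi - 0.5)) for xi in x]  # steep layer at x = 0.5
ax = adapt_grid(x, u)
# The spacing near the layer shrinks well below the uniform spacing 1/40.
```

    In the iterative loop of the abstract, one would now recompute the solution on `ax` and repeat until the point distribution settles.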

  6. High-Order Residual-Distribution Hyperbolic Advection-Diffusion Schemes: 3rd-, 4th-, and 6th-Order

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza R.; Nishikawa, Hiroaki

    2014-01-01

    In this paper, spatially high-order Residual-Distribution (RD) schemes using the first-order hyperbolic system method are proposed for general time-dependent advection-diffusion problems. The corresponding second-order time-dependent hyperbolic advection- diffusion scheme was first introduced in [NASA/TM-2014-218175, 2014], where rapid convergences over each physical time step, with typically less than five Newton iterations, were shown. In that method, the time-dependent hyperbolic advection-diffusion system (linear and nonlinear) was discretized by the second-order upwind RD scheme in a unified manner, and the system of implicit-residual-equations was solved efficiently by Newton's method over every physical time step. In this paper, two techniques for the source term discretization are proposed; 1) reformulation of the source terms with their divergence forms, and 2) correction to the trapezoidal rule for the source term discretization. Third-, fourth, and sixth-order RD schemes are then proposed with the above techniques that, relative to the second-order RD scheme, only cost the evaluation of either the first derivative or both the first and the second derivatives of the source terms. A special fourth-order RD scheme is also proposed that is even less computationally expensive than the third-order RD schemes. The second-order Jacobian formulation was used for all the proposed high-order schemes. The numerical results are then presented for both steady and time-dependent linear and nonlinear advection-diffusion problems. It is shown that these newly developed high-order RD schemes are remarkably efficient and capable of producing the solutions and the gradients to the same order of accuracy of the proposed RD schemes with rapid convergence over each physical time step, typically less than ten Newton iterations.

  7. Real Gas Computation Using an Energy Relaxation Method and High-Order WENO Schemes

    NASA Technical Reports Server (NTRS)

    Montarnal, Philippe; Shu, Chi-Wang

    1998-01-01

    In this paper, we use a recently developed energy relaxation theory by Coquel and Perthame and high order weighted essentially non-oscillatory (WENO) schemes to simulate the Euler equations of real gas. The main idea is an energy decomposition into two parts: one part is associated with a simpler pressure law and the other part (the nonlinear deviation) is convected with the flow. A relaxation process is performed for each time step to ensure that the original pressure law is satisfied. The necessary characteristic decomposition for the high order WENO schemes is performed on the characteristic fields based on the first part. The algorithm only calls for the original pressure law once per grid point per time step, without the need to compute its derivatives or any Riemann solvers. Both one and two dimensional numerical examples are shown to illustrate the effectiveness of this approach.

  8. Analysis of triangular C-grid finite volume scheme for shallow water flows

    NASA Astrophysics Data System (ADS)

    Shirkhani, Hamidreza; Mohammadian, Abdolmajid; Seidou, Ousmane; Qiblawey, Hazim

    2015-08-01

    In this paper, a dispersion relation analysis is employed to investigate the finite volume triangular C-grid formulation for two-dimensional shallow-water equations. In addition, two proposed combinations of time-stepping methods with the C-grid spatial discretization are investigated. In the first part of this study, the C-grid spatial discretization scheme is assessed, and in the second part, fully discrete schemes are analyzed. Analysis of the semi-discretized scheme (i.e. only spatial discretization) shows that there is no damping associated with the spatial C-grid scheme, and its phase speed behavior is also acceptable for long and intermediate waves. The analytical dispersion analysis after considering the effect of time discretization shows that the Leap-Frog time stepping technique can improve the phase speed behavior of the numerical method; however, it could not damp the shorter decelerated waves. The Adams-Bashforth technique leads to slower propagation of short and intermediate waves and damps those waves with a slower propagating speed. The numerical solutions of various test problems conform to and are in good agreement with the analytical dispersion analysis. They also indicate that the Adams-Bashforth scheme exhibits faster convergence and more accurate results as the spatial and temporal step sizes decrease. However, the Leap-Frog scheme is more stable at higher CFL numbers.

  9. Unsteady boundary layers with an intelligent numerical scheme

    NASA Astrophysics Data System (ADS)

    Cebeci, T.

    1986-02-01

    A numerical method has been developed to represent unsteady boundary layers with large flow reversal. It makes use of the characteristic box scheme, which examines the finite-difference grid in relation to the magnitude and direction of the local velocity and reaches and implements a decision to ensure that the Courant-Friedrichs-Lewy stability criterion is not violated. The method has been applied to the problem of an impulsively started circular cylinder and the results, though generally consistent with those of van Dommelen and Shen obtained with a Lagrangian method, show some differences. The time step is identified as very important and, with the present intelligent numerical scheme, the results were readily extended to times far beyond those previously achieved with Eulerian methods. Extrapolation of the results suggests that the much-discussed singularity for this unsteady flow is the same as that of the corresponding steady flow.

  10. Tabled Execution in Scheme

    SciTech Connect

    Willcock, J J; Lumsdaine, A; Quinlan, D J

    2008-08-19

    Tabled execution is a generalization of memoization developed by the logic programming community. It not only saves results from tabled predicates, but also stores the set of currently active calls to them; tabled execution can thus provide meaningful semantics for programs that seemingly contain infinite recursions with the same arguments. In logic programming, tabled execution is used for many purposes, both for improving the efficiency of programs and for making tasks simpler and more direct to express than with normal logic programs. However, tabled execution is only infrequently applied in mainstream functional languages such as Scheme. We demonstrate an elegant implementation of tabled execution in Scheme, using a mix of continuation-passing style and mutable data. We also show the use of tabled execution in Scheme for a problem in formal language and automata theory, demonstrating that tabled execution can be a valuable tool for Scheme users.
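
    The core idea, a result table plus a set of currently active calls, can be sketched in Python rather than Scheme. Returning a default value when a call re-enters itself is a simplification of real tabling engines, which resolve such cycles by fixed-point iteration; the graph-reachability "predicate" below is an invented example:

```python
# Minimal tabling sketch: memoize completed calls and track active ones,
# so a re-entrant call with identical arguments is cut off instead of
# recursing forever.

def tabled(default=False):
    def wrap(fn):
        table, active = {}, set()
        def memo(*args):
            if args in table:
                return table[args]          # answer already tabled
            if args in active:
                return default              # same call in progress: cut cycle
            active.add(args)
            try:
                table[args] = fn(*args)
            finally:
                active.discard(args)
            return table[args]
        return memo
    return wrap

# A left-recursive-style query: node 1 has an edge to itself, so naive
# recursion on reaches(1, b) would never terminate without tabling.
edges = {1: [1, 2], 2: [3], 3: []}

@tabled(default=False)
def reaches(a, b):
    if a == b:
        return True
    return any(reaches(n, b) for n in edges[a])

print(reaches(1, 3))  # True, despite the self-loop at node 1
```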

  11. Dynamic remedial action scheme using online transient stability analysis

    NASA Astrophysics Data System (ADS)

    Shrestha, Arun

    Economic pressure and environmental factors have forced the modern power systems to operate closer to their stability limits. However, maintaining transient stability is a fundamental requirement for the operation of interconnected power systems. In North America, power systems are planned and operated to withstand the loss of any single or multiple elements without violating North American Electric Reliability Corporation (NERC) system performance criteria. For a contingency resulting in the loss of multiple elements (Category C), emergency transient stability controls may be necessary to stabilize the power system. Emergency control is designed to sense abnormal conditions and subsequently take pre-determined remedial actions to prevent instability. Commonly known as either Remedial Action Schemes (RAS) or as Special/System Protection Schemes (SPS), these emergency control approaches have been extensively adopted by utilities. RAS are designed to address specific problems, e.g. to increase power transfer, to provide reactive support, to address generator instability, to limit thermal overloads, etc. Possible remedial actions include generator tripping, load shedding, capacitor and reactor switching, static VAR control, etc. Among various RAS types, generation shedding is the most effective and widely used emergency control means for maintaining system stability. In this dissertation, an optimal power flow (OPF)-based generation-shedding RAS is proposed. This scheme uses online transient stability calculation and generator cost function to determine appropriate remedial actions. For transient stability calculation, SIngle Machine Equivalent (SIME) technique is used, which reduces the multimachine power system model to a One-Machine Infinite Bus (OMIB) equivalent and identifies critical machines. Unlike conventional RAS, which are designed using offline simulations, online stability calculations make the proposed RAS dynamic and adapting to any power system

  12. Unconditionally stable time marching scheme for Reynolds stress models

    NASA Astrophysics Data System (ADS)

    Mor-Yossef, Y.

    2014-11-01

    Progress toward a stable and efficient numerical treatment for the compressible Favre-Reynolds-averaged Navier-Stokes equations with a Reynolds-stress model (RSM) is presented. The mean-flow and the Reynolds stress model equations are discretized using finite differences on a curvilinear coordinates mesh. The convective flux is approximated by a third-order upwind biased MUSCL scheme. The diffusive flux is approximated using second-order central differencing, based on a full-viscous stencil. The novel time-marching approach relies on decoupled, implicit time integration, that is, the five mean-flow equations are solved separately from the seven Reynolds-stress closure equations. The key idea is the use of the unconditionally positive-convergent implicit scheme (UPC), originally developed for two-equation turbulence models. The extension of the UPC scheme for RSM guarantees the positivity of the normal Reynolds-stress components and the turbulence (specific) dissipation rate for any time step. Thanks to the UPC matrix-free structure and the decoupled approach, the resulting computational scheme is very efficient. Special care is dedicated to maintain the implicit operator compact, involving only nearest neighbor grid points, while fully supporting the larger discretized residual stencil. Results obtained from two- and three-dimensional numerical simulations demonstrate the significant progress achieved in this work toward optimally convergent solution of Reynolds stress models. Furthermore, the scheme is shown to be unconditionally stable and positive.
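
    The positivity mechanism of UPC-type schemes can be illustrated on a scalar model equation with production and quadratic destruction: treating the destruction term semi-implicitly keeps the unknown positive for any time step. The model equation and coefficients below are illustrative, not the paper's Reynolds-stress closure:

```python
# Semi-implicit positivity trick on dw/dt = P - beta*w**2 (P, beta, w > 0):
# linearize destruction as beta*w_old*w_new, so the update divides by a
# positive factor and w stays positive for arbitrarily large dt.

def positive_step(w, dt, P=1.0, beta=10.0):
    """One step of (w_new - w)/dt = P - beta*w*w_new."""
    return (w + dt * P) / (1.0 + dt * beta * w)

w = 5.0
for dt in (1e-3, 1.0, 1e3):      # wildly varying time steps
    for _ in range(100):
        w = positive_step(w, dt)
    assert w > 0.0               # positivity holds unconditionally
print(w)                         # approaches the equilibrium sqrt(P/beta)
```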

  13. Indirect visual cryptography scheme

    NASA Astrophysics Data System (ADS)

    Yang, Xiubo; Li, Tuo; Shi, Yishi

    2015-10-01

    Visual cryptography (VC) is a cryptographic scheme for images. In encryption, an image carrying a message is encoded into N sub-images, and any K of them can decode the message under special rules (N>=2, 2<=K<=N). When any K of the N sub-images are printed on transparencies and stacked exactly, the message of the original image is decrypted by the human visual system, while any K-1 of them reveal no information about it. This cryptographic scheme can decode concealed images without any cryptographic computation, and it has high security. However, the scheme lacks concealment because of the obvious features of the sub-images. In this paper, we introduce an indirect visual cryptography scheme (IVCS), which encodes the sub-images into pure phase images without visible strength, based on the encoding of visual cryptography. The pure phase images are the final ciphertexts. The indirect visual cryptography scheme not only inherits the merits of visual cryptography, but also gains indirection, concealment and security. Meanwhile, accurate alignment is no longer required, which gives the scheme strong anti-interference capacity and robustness. The decryption system can be highly integrated and conveniently operated, and its decryption process is dynamic and fast, which all lead to good potential in practice.
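    The (K, N) threshold idea can be illustrated with the simplest (2, 2) case, where each pixel expands into two subpixels per share and stacking transparencies acts as a pixel-wise OR. This is a generic textbook sketch, not the paper's phase-image IVCS; all names are illustrative:

```python
import random

def encrypt_2of2(image):
    """Split a binary image (1 = black, 0 = white) into two shares.
    Each pixel expands to two subpixels per share; stacking the
    shares (pixel-wise OR) reveals the image, while each share
    alone is a uniformly random half-black pattern."""
    patterns = [(0, 1), (1, 0)]
    share1, share2 = [], []
    for row in image:
        r1, r2 = [], []
        for px in row:
            p = random.choice(patterns)
            r1.extend(p)
            # white pixel: identical pattern (stacks to half black)
            # black pixel: complementary pattern (stacks to all black)
            r2.extend(p if px == 0 else (1 - p[0], 1 - p[1]))
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def stack(share1, share2):
    """Simulate stacking two printed transparencies (OR of subpixels)."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(share1, share2)]
```

    A black pixel stacks to two black subpixels, a white pixel to only one, so the eye sees the contrast; each share by itself carries no information about the pixel.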

  14. Artifact reduction scheme for objects with complex geometry

    NASA Astrophysics Data System (ADS)

    Hayee, Sobia; Basart, John P.

    2000-05-01

    The technique of laminography is used to image different planes of interest in an object. A laminograph is obtained by shifting and aligning several radiographs. Boundaries of features at different focal planes are used for alignment; hence the accuracy of alignment depends on good edge or boundary detection of features. Since real radiographs always contain noise, some kind of noise removal technique also has to be employed. The most common noise removal procedure, low-pass filtering, results in the loss of important edge information. Hence, to form a precise laminograph, a noise reduction technique is required that reduces noise while preserving, if not enhancing, edges. Nonlinear diffusion filtering provides such a technique. It is the solution of the nonlinear diffusion equation in which the diffusion coefficient is chosen to minimize diffusion across edges, thereby preserving them, while the diffusion process in the interior of regions reduces noise. Usually, the numerical implementation of the nonlinear diffusion equation is done using explicit schemes. Such schemes impose strict restrictions on the time step size for stability, and hence require numerous iterations, which leads to poor efficiency. If a semi-implicit scheme is used for the numerical solution of the nonlinear diffusion equation, the results are good and stable for all time step sizes. The application of the nonlinear diffusion equation using the semi-implicit scheme to a radiograph results in noise reduction and edge enhancement, which in turn means a more precise laminograph. The effect of this technique on real images is shown in comparison to conventional methods of noise reduction and edge detection.
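    A minimal 1D sketch of such a semi-implicit nonlinear diffusion step: the Perona-Malik-type diffusivity is lagged one time step (evaluated at the old solution), so each update reduces to a linear tridiagonal solve that is stable for any step size. The function name, diffusivity form and parameters are illustrative assumptions, not the paper's implementation; the tridiagonal system is solved densely for brevity:

```python
import numpy as np

def nonlinear_diffusion_semi_implicit(u, dt, n_steps, kappa=0.1):
    """1D edge-preserving diffusion. Diffusivity g = 1/(1+(u'/kappa)^2)
    is small near steep gradients (edges), so they are preserved,
    while gentle noise gradients are smoothed out."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    for _ in range(n_steps):
        g = 1.0 / (1.0 + (np.gradient(u) / kappa) ** 2)
        gh = 0.5 * (g[:-1] + g[1:])          # interface diffusivities g_{i+1/2}
        A = np.zeros((n, n))                 # tridiagonal implicit operator
        for i in range(n):
            gl = gh[i - 1] if i > 0 else 0.0         # zero-flux boundaries
            gr = gh[i] if i < n - 1 else 0.0
            A[i, i] = 1.0 + dt * (gl + gr)
            if i > 0:
                A[i, i - 1] = -dt * gl
            if i < n - 1:
                A[i, i + 1] = -dt * gr
        u = np.linalg.solve(A, u)            # stable for any dt
    return u
```

    With an explicit scheme, dt = 5 on a unit grid would be far beyond the stability limit; here the same step is unconditionally stable.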

  15. Coupling WRF double-moment 6-class microphysics schemes to RRTMG radiation scheme in weather research forecasting model

    DOE PAGESBeta

    Bae, Soo Ya; Hong, Song -You; Lim, Kyo-Sun Sunny

    2016-01-01

    A method to explicitly calculate the effective radius of hydrometeors in the Weather Research Forecasting (WRF) double-moment 6-class (WDM6) microphysics scheme is designed to tackle the physical inconsistency in cloud properties between the microphysics and radiation processes. At each model time step, the calculated effective radii of hydrometeors from the WDM6 scheme are linked to the Rapid Radiative Transfer Model for GCMs (RRTMG) scheme to consider the cloud effects in radiative flux calculation. This coupling effect of cloud properties between the WDM6 and RRTMG algorithms is examined for a heavy rainfall event in Korea during 25–27 July 2011, and it is compared to the results from the control simulation in which the effective radius is prescribed as a constant value. It is found that the derived radii of hydrometeors in the WDM6 scheme are generally larger than the prescribed values in the RRTMG scheme. Consequently, shortwave fluxes reaching the ground (SWDOWN) are increased over less cloudy regions, showing a better agreement with a satellite image. The overall distribution of the 24-hour accumulated rainfall is not affected but its amount is changed. In conclusion, a spurious rainfall peak over the Yellow Sea is alleviated, whereas the local maximum in the central part of the peninsula is increased.

  17. Adaptive Dynamic Bayesian Networks

    SciTech Connect

    Ng, B M

    2007-10-26

    A discrete-time Markov process can be compactly modeled as a dynamic Bayesian network (DBN)--a graphical model with nodes representing random variables and directed edges indicating causality between variables. Each node has a probability distribution, conditional on the variables represented by the parent nodes. A DBN's graphical structure encodes fixed conditional dependencies between variables. But in real-world systems, conditional dependencies between variables may be unknown a priori or may vary over time. Model errors can result if the DBN fails to capture all possible interactions between variables. Thus, we explore the representational framework of adaptive DBNs, whose structure and parameters can change from one time step to the next: a distribution's parameters and its set of conditional variables are dynamic. This work builds on recent work in nonparametric Bayesian modeling, such as hierarchical Dirichlet processes, infinite-state hidden Markov networks and structured priors for Bayes net learning. In this paper, we will explain the motivation for our interest in adaptive DBNs, show how popular nonparametric methods are combined to formulate the foundations for adaptive DBNs, and present preliminary results.

  18. Multiscale/fractional step schemes for the numerical simulation of the rotating shallow water flows with complex periodic topography

    NASA Astrophysics Data System (ADS)

    Jauberteau, F.; Temam, R. M.; Tribbia, J.

    2014-08-01

    In this paper, we study several multiscale/fractional step schemes for the numerical solution of the rotating shallow water equations with complex topography. We consider the case of periodic boundary conditions (f-plane model). Spatial discretization is obtained using a Fourier spectral Galerkin method. For the schemes presented in this paper we consider two approaches. The first approach (multiscale schemes) is based on topography scale separation, and the numerical time integration is a function of the scales. The second approach is based on a splitting of the operators, and the time integration method is a function of the operator considered (fractional step schemes). The numerical results obtained are compared with the explicit reference scheme (Leap-Frog scheme). With these multiscale/fractional step schemes the objective is to propose new schemes giving numerical results similar to those obtained using only one uniform fine grid N×N and a time step Δt, but with a CPU time close to the CPU time needed when using only one coarse grid N1×N1, N1<N, with a time step Δt′>Δt.

  19. Adaptive Force Control in Compliant Motion

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1994-01-01

    This paper addresses the problem of controlling a manipulator in compliant motion while in contact with an environment having an unknown stiffness. Two classes of solutions are discussed: adaptive admittance control and adaptive compliance control. In both admittance and compliance control schemes, compensator adaptation is used to ensure a stable and uniform system performance.

  20. Decoupled energy stable schemes for phase-field vesicle membrane model

    NASA Astrophysics Data System (ADS)

    Chen, Rui; Ji, Guanghua; Yang, Xiaofeng; Zhang, Hui

    2015-12-01

    We consider the numerical approximations of the classical phase-field vesicle membrane models proposed a decade ago in Du et al. (2004) [6]. We first reformulate the model derived from an energetic variational formulation into a form which is suitable for numerical approximation, and establish the energy dissipation law. Then, we develop a stabilized, decoupled, time discretization scheme for the coupled nonlinear system. The scheme is unconditionally energy stable and leads to linear and decoupled elliptic equations to be solved at each time step. Stability analysis and ample numerical simulations are presented thereafter.

  1. Performance and optimization of direct implicit time integration schemes for use in electrostatic particle simulation codes

    SciTech Connect

    Procassini, R.J.; Birdsall, C.K.; Morse, E.C.; Cohen, B.I.

    1988-01-01

    Implicit time integration schemes allow for the use of larger time steps than conventional explicit methods, thereby extending the applicability of kinetic particle simulation methods. This paper will describe a study of the performance and optimization of two such direct implicit schemes, which are used to follow the trajectories of charged particles in an electrostatic, particle-in-cell plasma simulation code. The direct implicit method that was used for this study is an alternative to the moment-equation implicit method. 10 refs., 7 figs., 4 tabs.

  2. Compact Spreader Schemes

    SciTech Connect

    Placidi, M.; Jung, J. -Y.; Ratti, A.; Sun, C.

    2014-07-25

    This paper describes beam distribution schemes adopting a novel implementation based on low-amplitude vertical deflections combined with horizontal ones generated by Lambertson-type septum magnets. This scheme offers substantial compactness in the longitudinal layouts of the beam lines and increased flexibility for beam delivery to multiple beam lines on a shot-to-shot basis. Fast kickers (FK) or transverse electric field RF deflectors (RFD) provide the low-amplitude deflections. Initially proposed at the Stanford Linear Accelerator Center (SLAC) as tools for beam diagnostics and more recently adopted for multiline beam pattern schemes, RFDs offer repetition capabilities and likely better amplitude reproducibility when compared to FKs, which, in turn, involve more modest costs in both construction and operation. Both solutions represent an ideal approach for the design of compact beam distribution systems, resulting in space and cost savings while preserving flexibility and beam quality.

  3. Nonstandard finite difference schemes

    NASA Technical Reports Server (NTRS)

    Mickens, Ronald E.

    1995-01-01

    The major research activities of this proposal center on the construction and analysis of nonstandard finite-difference schemes for ordinary and partial differential equations. In particular, we investigate schemes that either have zero truncation errors (exact schemes) or possess other significant features of importance for numerical integration. Our eventual goal is to bring these methods to bear on problems that arise in the modeling of various physical, engineering, and technological systems. At present, these efforts are extended in the direction of understanding the exact nature of these nonstandard procedures and extending their use to more complicated model equations. Our presentation will give a listing (obtained to date) of the nonstandard rules, their application to a number of linear and nonlinear, ordinary and partial differential equations. In certain cases, numerical results will be presented.

  4. Identifying technology barriers in adapting a state-of-the-art gas turbine for IGCC applications and an experimental investigation of air extraction schemes for IGCC operations. Final report

    SciTech Connect

    Yang, Tah-teh; Agrawal, A.K.; Kapat, J.S.

    1993-06-01

    Under contracted work with Morgantown Energy Technology Center, Clemson University, the prime contractor, and General Electric (GE) and CRSS, the subcontractors, made a comprehensive study in the first phase of research to investigate the technology barriers of integrating a coal gasification process with a hot gas cleanup scheme and the state-of-the-art industrial gas turbine, the GE MS-7001F. This effort focused on (1) establishing the analytical tools necessary for modeling combustion phenomena and emissions in gas turbine combustors operating on multiple-species coal gas, (2) estimating the overall performance of the GE MS-7001F combined cycle plant, (3) evaluating material issues in the hot gas path, (4) examining the flow and temperature fields when air extraction takes place at both the compressor exit and at the manhole adjacent to the combustor, and (5) examining the combustion/cooling limitations of such a gas turbine by using 3-D numerical simulation of a MS-7001F combustor operated with gasified coal. In the second phase of this contract, a 35% cold-flow model similar to GE's MS-7001F gas turbine was built for mapping the flow region between the compressor exit and the expander inlet. The model included sufficient details, such as the combustor's transition pieces, the fuel nozzles, and the supporting struts. Four cases were studied: the first with a baseline flow field of a GE 7001F without air extraction; the second with a GE 7001F with air extraction; and the third and fourth with a GE 7001F using a Griffith diffuser to replace the straight-wall diffuser and operating without air extraction and with extraction, respectively.

  5. Check-Digit Schemes.

    ERIC Educational Resources Information Center

    Wheeler, Mary L.

    1994-01-01

    Discusses the study of identification codes and check-digit schemes as a way to show students a practical application of mathematics and introduce them to coding theory. Examples include postal service money orders, parcel tracking numbers, ISBN codes, bank identification numbers, and UPC codes. (MKR)
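    For instance, the UPC-A check digit mentioned above is chosen so that three times the sum of the odd-position digits plus the sum of the even-position digits is a multiple of 10. A small sketch (function names are illustrative):

```python
def upc_check_digit(digits11):
    """Check digit for an 11-digit UPC-A body: the digit that makes
    3*(sum of odd positions) + (sum of even positions) + check
    divisible by 10 (positions counted from 1)."""
    odd = sum(digits11[0::2])    # positions 1, 3, ..., 11
    even = sum(digits11[1::2])   # positions 2, 4, ..., 10
    return (-(3 * odd + even)) % 10

def is_valid_upc(digits12):
    """Validate a full 12-digit UPC-A code."""
    return upc_check_digit(digits12[:-1]) == digits12[-1]
```

    Because the weight 3 is coprime to 10, any single-digit error changes the checksum and is therefore detected.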

  6. From daily to sub-daily time steps - Creating a high temporal and spatial resolution climate reference data set for hydrological modeling and bias-correction of RCM data

    NASA Astrophysics Data System (ADS)

    Willkofer, Florian; Wood, Raul R.; Schmid, Josef; von Trentini, Fabian; Ludwig, Ralf

    2016-04-01

    The ClimEx project (Climate change and hydrological extreme events - risks and perspectives for water management in Bavaria and Québec) focuses on the effects of climate change on hydro-meteorological extreme events and their implications for water management in Bavaria and Québec. It builds on the conjoint analysis of a large ensemble of the CRCM5, driven by 50 members of the CanESM2, and the latest information provided through the CORDEX-initiative, to better assess the influence of natural climate variability and climatic change on the dynamics of extreme events. A critical point in the entire project is the preparation of a meteorological reference dataset with the required temporal (1-6h) and spatial (500m) resolution to be able to better evaluate hydrological extreme events in mesoscale river basins. For Bavaria a first reference data set (daily, 1km) used for bias-correction of RCM data was created by combining raster based data (E-OBS [1], HYRAS [2], MARS [3]) and interpolated station data using the meteorological interpolation schemes of the hydrological model WaSiM [4]. Apart from the coarse temporal and spatial resolution, this mosaic of different data sources is considered rather inconsistent and hence, not applicable for modeling of hydrological extreme events. Thus, the objective is to create a dataset with hourly data of temperature, precipitation, radiation, relative humidity and wind speed, which is then used for bias-correction of the RCM data being used as driver for hydrological modeling in the river basins. Therefore, daily data is disaggregated to hourly time steps using the 'Method of fragments' approach [5], based on available training stations. The disaggregation chooses fragments of daily values from observed hourly datasets, based on similarities in magnitude and behavior of previous and subsequent events. The choice of a certain reference station (hourly data, provision of fragments) for disaggregating daily station data (application

  7. Implicit unified gas-kinetic scheme for steady state solutions in all flow regimes

    NASA Astrophysics Data System (ADS)

    Zhu, Yajun; Zhong, Chengwen; Xu, Kun

    2016-06-01

    This paper presents an implicit unified gas-kinetic scheme (UGKS) for non-equilibrium steady state flow computation. The UGKS is a direct modeling method for flow simulation in all regimes with the updates of both macroscopic flow variables and microscopic gas distribution function. By solving the macroscopic equations implicitly, a predicted equilibrium state can be obtained first through iterations. With the newly predicted equilibrium state, the evolution equation of the gas distribution function and the corresponding collision term can be discretized in a fully implicit way for fast convergence through iterations as well. The lower-upper symmetric Gauss-Seidel (LU-SGS) factorization method is implemented to solve both macroscopic and microscopic equations, which improves the efficiency of the scheme. Since the UGKS is a direct modeling method and its physical solution depends on the mesh resolution and the local time step, a physical time step needs to be fixed before using an implicit iterative technique with a pseudo-time marching step. Therefore, the physical time step in the current implicit scheme is determined by the same way as that in the explicit UGKS for capturing the physical solution in all flow regimes, but the convergence to a steady state speeds up through the adoption of a numerical time step with large CFL number. Many numerical test cases in different flow regimes from low speed to hypersonic ones, such as the Couette flow, cavity flow, and the flow passing over a cylinder, are computed to validate the current implicit method. The overall efficiency of the implicit UGKS can be improved by one or two orders of magnitude in comparison with the explicit one.

  8. A numerical scheme for ionizing shock waves

    SciTech Connect

    Aslan, Necdet . E-mail: naslan@yeditepe.edu.tr; Mond, Michael

    2005-12-10

    A two-dimensional (2D) visual computer code to solve steady state (SS) or transient shock problems including partially ionizing plasma is presented. Since the flows considered are hypersonic and the resulting temperatures are high, the plasma is partially ionized. Hence the plasma constituents are electrons, ions and neutral atoms. It is assumed that all the above species are in thermal equilibrium, namely, that they all have the same temperature. The ionization degree is calculated from the Saha equation as a function of electron density and pressure by means of a nonlinear Newton-type root finding algorithm. The code utilizes a wave model and a numerical fluctuation distribution (FD) scheme that runs on structured or unstructured triangular meshes. This scheme is based on evaluating the mesh-averaged fluctuations arising from a number of waves and distributing them to the nodes of these meshes in an upwind manner. The physical properties (directions, strengths, etc.) of these wave patterns are obtained by a new wave model: ION-A, developed from the eigen-system of the flux Jacobian matrices. Since the equation of state (EOS) which is used to close the conservation laws includes electronic effects, it is a nonlinear function and it must be inverted by iterations to determine the ionization degree as a function of density and temperature. For the time advancement, the scheme utilizes a multi-stage Runge-Kutta (RK) algorithm with time steps carefully evaluated from the maximum possible propagation speed in the solution domain. The code runs interactively with the user and allows the user to create different meshes, to use different initial and boundary conditions and to see changes of desired physical quantities in the form of color and vector graphics. The details of the visual properties of the code have been published before (see [N. Aslan, A visual fluctuation splitting scheme for magneto-hydrodynamics with a new sonic fix and Euler limit, J. Comput. Phys. 197 (2004) 1

  9. Beyond first-order finite element schemes in micromagnetics

    SciTech Connect

    Kritsikis, E.; Vaysset, A.; Buda-Prejbeanu, L.D.; Toussaint, J.-C.

    2014-01-01

    Magnetization dynamics in ferromagnetic materials is ruled by the Landau–Lifshitz–Gilbert equation (LLG). Reliable schemes must conserve the magnetization norm, which is a nonconvex constraint, and be energy-decreasing unless there is pumping. Some of the authors previously devised a convergent finite element scheme that, by choice of an appropriate test space – the tangent plane to the magnetization – reduces to a linear problem at each time step. The scheme was however first-order in time. We claim it is not an intrinsic limitation, and the same approach can lead to efficient micromagnetic simulation. We show how the scheme order can be increased, and the nonlocal (magnetostatic) interactions be tackled in logarithmic time, by the fast multipole method or the non-uniform fast Fourier transform. Our implementation is called feeLLGood. A test-case of the National Institute of Standards and Technology is presented, then another one relevant to spin-transfer effects (the spin-torque oscillator).

  10. Beyond first-order finite element schemes in micromagnetics

    NASA Astrophysics Data System (ADS)

    Kritsikis, E.; Vaysset, A.; Buda-Prejbeanu, L. D.; Alouges, F.; Toussaint, J.-C.

    2014-01-01

    Magnetization dynamics in ferromagnetic materials is ruled by the Landau-Lifshitz-Gilbert equation (LLG). Reliable schemes must conserve the magnetization norm, which is a nonconvex constraint, and be energy-decreasing unless there is pumping. Some of the authors previously devised a convergent finite element scheme that, by choice of an appropriate test space - the tangent plane to the magnetization - reduces to a linear problem at each time step. The scheme was however first-order in time. We claim it is not an intrinsic limitation, and the same approach can lead to efficient micromagnetic simulation. We show how the scheme order can be increased, and the nonlocal (magnetostatic) interactions be tackled in logarithmic time, by the fast multipole method or the non-uniform fast Fourier transform. Our implementation is called feeLLGood. A test-case of the National Institute of Standards and Technology is presented, then another one relevant to spin-transfer effects (the spin-torque oscillator).

  11. Implicit scheme for Maxwell equations solution in case of flat 3D domains

    NASA Astrophysics Data System (ADS)

    Boronina, Marina; Vshivkov, Vitaly

    2016-02-01

    We present a new finite-difference scheme for the solution of Maxwell's equations in three-dimensional domains with different scales in different directions. The stability condition of the standard leap-frog scheme requires the time step to decrease along with the minimal spatial step, which depends on the minimal domain size. We overcome this conditional stability by modifying the standard scheme, adding implicitness in the direction of the smallest size. The new scheme satisfies the Gauss law for the electric and magnetic fields in the finite differences. The approximation order, the maintenance of the wave amplitude and propagation speed, and the invariance of the wave propagation with respect to the angle to the coordinate axes are analyzed.

  12. Simple scheme for encoding and decoding a qubit in unknown state for various topological codes

    PubMed Central

    Łodyga, Justyna; Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał

    2015-01-01

    We present a scheme for encoding and decoding an unknown state for CSS codes, based on syndrome measurements. We illustrate our method by means of the Kitaev toric code, the defected-lattice code, the topological subsystem code and the 3D Haah code. The protocol is local whenever in a given code the crossings between the logical operators consist of next-neighbour pairs, which holds for the above codes. For the subsystem code we also present a scheme for the noisy case, where we allow for bit- and phase-flip errors on qubits as well as state preparation and syndrome measurement errors. A similar scheme can be built for the two other codes. We show that the fidelity of the protected qubit in the noisy scenario in a large code size limit is of , where p is the probability of error on a single qubit per time step. Regarding the Haah code we provide a noiseless scheme, leaving the noisy case as an open problem. PMID:25754905

  13. Discrete unified gas kinetic scheme for all Knudsen number flows: low-speed isothermal case.

    PubMed

    Guo, Zhaoli; Xu, Kun; Wang, Ruijie

    2013-09-01

    Based on the Boltzmann-BGK (Bhatnagar-Gross-Krook) equation, in this paper a discrete unified gas kinetic scheme (DUGKS) is developed for low-speed isothermal flows. The DUGKS is a finite-volume scheme with the discretization of particle velocity space. After the introduction of two auxiliary distribution functions with the inclusion of collision effect, the DUGKS becomes a fully explicit scheme for the update of distribution function. Furthermore, the scheme is an asymptotic preserving method, where the time step is only determined by the Courant-Friedrichs-Lewy condition in the continuum limit. Numerical results demonstrate that accurate solutions in both continuum and rarefied flow regimes can be obtained from the current DUGKS. The comparison between the DUGKS and the well-defined lattice Boltzmann equation method (D2Q9) is presented as well. PMID:24125383

  14. Adaptive control of a Stewart platform-based manipulator

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Antrazi, Sami S.; Zhou, Zhen-Lei; Campbell, Charles E., Jr.

    1993-01-01

    A joint-space adaptive control scheme for controlling noncompliant motion of a Stewart platform-based manipulator (SPBM) was implemented in the Hardware Real-Time Emulator at Goddard Space Flight Center. The six-degree-of-freedom SPBM uses two platforms and six linear actuators driven by dc motors. The adaptive control scheme is based on proportional-derivative controllers whose gains are adjusted by an adaptation law based on model reference adaptive control and the Lyapunov direct method. It is concluded that the adaptive control scheme provides superior tracking capability as compared to fixed-gain controllers.
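    As a toy illustration of the model-reference adaptation idea, the MIT-rule sketch below adapts a single feedforward gain for a static plant so that its output tracks a reference model. The paper's scheme adapts PD gains with a Lyapunov-based law for full manipulator dynamics, which this sketch does not reproduce; all names and values are hypothetical:

```python
import math

def adapt_gain(k_plant, k_model, gamma=0.5, dt=0.01, t_end=50.0):
    """MIT-rule gain adaptation (toy MRAC). Plant y = k_plant*u with
    unknown k_plant; reference model ym = k_model*r. The control is
    u = theta*r, and theta is driven so the error e = y - ym vanishes,
    i.e. theta -> k_model/k_plant."""
    theta, t = 0.0, 0.0
    while t < t_end:
        r = math.sin(t)              # persistently exciting reference
        u = theta * r                # adaptive feedforward control
        y = k_plant * u              # plant output
        ym = k_model * r             # reference-model output
        e = y - ym
        theta -= gamma * e * r * dt  # MIT rule: d(theta)/dt = -gamma*e*r
        t += dt
    return theta
```

    The gain converges so that the closed loop matches the reference model even though k_plant is never identified explicitly, which is the essence of direct MRAC.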

  15. Hybridization schemes for clusters

    NASA Astrophysics Data System (ADS)

    Wales, David J.

    The concept of an optimum hybridization scheme for cluster compounds is developed with particular reference to electron counting. The prediction of electron counts for clusters and the interpretation of the bonding is shown to depend critically upon the presumed hybridization pattern of the cluster vertex atoms. This fact has not been properly appreciated in previous work, particularly in applications of Stone's tensor surface harmonic (TSH) theory, but is found to be a useful tool when dealt with directly. A quantitative definition is suggested for the optimum cluster hybridization pattern based directly upon the ease of interpretation of the molecular orbitals, and results are given for a range of species. The relationship of this scheme to the detailed cluster geometry is described using Löwdin's partitioned perturbation theory, and the success and range of application of TSH theory are discussed.

  16. An expert system based intelligent control scheme for space bioreactors

    NASA Technical Reports Server (NTRS)

    San, Ka-Yiu

    1988-01-01

    An expert system based intelligent control scheme is being developed for the effective control and full automation of bioreactor systems in space. The scheme developed will have the capability to capture information from various resources including heuristic information from process researchers and operators. The knowledge base of the expert system should contain enough expertise to perform on-line system identification and thus be able to adapt the controllers accordingly with minimal human supervision.

  17. Constrained Self-adaptive Solutions Procedures for Structure Subject to High Temperature Elastic-plastic Creep Effects

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Tovichakchaikul, S.

    1983-01-01

    This paper will develop a new solution strategy which can handle elastic-plastic-creep problems in an inherently stable manner. This is achieved by introducing a new constrained time stepping algorithm which will enable the solution of creep-initiated pre/postbuckling behavior where indefinite tangent stiffnesses are encountered. Due to the generality of the scheme, both monotone and cyclic loading histories can be handled. The presentation will give a thorough overview of current solution schemes and their shortcomings and the development of constrained time stepping algorithms, as well as illustrate the results of several numerical experiments which benchmark the new procedure.

  18. Beyond Scheme F

    SciTech Connect

    Elliott, C.J.; Fisher, H.; Pepin, J.; Gillmann, R.

    1996-07-01

    Traffic classification techniques were evaluated using data from a 1993 investigation of the traffic flow patterns on I-20 in Georgia. First we improved the data by sifting through the data base, checking against the original video for questionable events and removing and/or repairing them. We used this data base to critique quantitatively the performance of a classification method known as Scheme F. As a context for improving the approach, we show in this paper that Scheme F can be represented as a McCulloch-Pitts neural network, or as an equivalent decomposition of the plane. We found that Scheme F, among other things, severely misrepresents the number of vehicles in Class 3 by labeling them as Class 2. After discussing the basic classification problem in terms of what is measured and what is the desired prediction goal, we set forth desirable characteristics of the classification scheme and describe a recurrent neural network system that partitions the high-dimensional space into bins for each axle separation. The collection of bin numbers, one for each of the axle separations, specifies a region in the axle space called a hyper-bin. All the vehicles counted that have the same set of bin numbers are in the same hyper-bin. The probability of the occurrence of a particular class in that hyper-bin is the relative frequency with which that class occurs in that set of bin numbers. This type of algorithm produces classification results that are much more balanced and uniform with respect to Classes 2 and 3 and Class 10. In particular, the cancellation of classification errors that occurs is for many applications the ideal classification scenario. The neural network results are presented in the form of a primary classification network and a reclassification network, the performance matrices for which are presented.
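    The hyper-bin construction can be sketched directly: bin each axle separation, use the tuple of bin indices as the hyper-bin key, and report class probabilities as relative frequencies within the bin. Bin width, class labels and the input format are illustrative, and the actual system forms its partition with a recurrent neural network rather than fixed-width bins:

```python
from collections import Counter, defaultdict

class HyperBinClassifier:
    """Each axle separation is quantized into a bin; the tuple of bin
    indices identifies a hyper-bin. The class probability reported for
    a vehicle is the relative frequency of that class among training
    vehicles in the same hyper-bin."""

    def __init__(self, bin_width=1.0):
        self.bin_width = bin_width
        self.bins = defaultdict(Counter)

    def _key(self, separations):
        return tuple(int(s // self.bin_width) for s in separations)

    def fit(self, vehicles):
        """vehicles: iterable of (axle_separations, class_label) pairs."""
        for seps, label in vehicles:
            self.bins[self._key(seps)][label] += 1

    def predict_proba(self, separations):
        counts = self.bins[self._key(separations)]
        n = sum(counts.values())
        return {label: c / n for label, c in counts.items()} if n else {}
```

    Reporting frequencies instead of a hard label is what allows misclassifications to cancel in aggregate counts, the property the abstract highlights.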

  19. Comparison of Implicit Schemes to Solve Equations of Radiation Hydrodynamics with a Flux-limited Diffusion Approximation: Newton-Raphson, Operator Splitting, and Linearization

    NASA Astrophysics Data System (ADS)

    Tetsu, Hiroyuki; Nakamoto, Taishi

    2016-03-01

    Radiation is an important process of energy transport, a force, and a basis for synthetic observations, so radiation hydrodynamics (RHD) calculations have occupied an important place in astrophysics. However, although the progress in computational technology is remarkable, their high numerical cost is still a persistent problem. In this work, we compare the following schemes used to solve the nonlinear simultaneous equations of an RHD algorithm with the flux-limited diffusion approximation: the Newton-Raphson (NR) method, operator splitting, and linearization (LIN), from the perspective of the computational cost involved. For operator splitting, in addition to the traditional simple operator splitting (SOS) scheme, we examined the scheme developed by Douglas & Rachford (DROS). We solve three test problems (the thermal relaxation mode, the relaxation and the propagation of linear waves, and radiating shock) using these schemes and then compare their dependence on the time step size. As a result, we find the conditions of the time step size necessary for adopting each scheme. The LIN scheme is superior to other schemes if the ratio of radiation pressure to gas pressure is sufficiently low. On the other hand, DROS can be the most efficient scheme if the ratio is high. Although the NR scheme can be adopted independently of the regime, especially in a problem that involves optically thin regions, the convergence tends to be worse. In all cases, SOS is not practical.

  20. High Order Finite Volume Nonlinear Schemes for the Boltzmann Transport Equation

    SciTech Connect

    Bihari, B L; Brown, P N

    2005-03-29

    The authors apply the nonlinear WENO (Weighted Essentially Nonoscillatory) scheme to the spatial discretization of the Boltzmann Transport Equation modeling linear particle transport. The method is a finite volume scheme which ensures not only conservation, but also provides for a more natural handling of boundary conditions, material properties and source terms, as well as an easier parallel implementation and post-processing. It is nonlinear in the sense that the stencil depends on the solution at each time step or iteration level. By biasing the gradient calculation towards the stencil with smaller derivatives, the scheme eliminates the Gibbs phenomenon with oscillations of size O(1) and reduces them to O(h{sup r}), where h is the mesh size and r is the order of accuracy. The current implementation is three-dimensional, generalized for unequally spaced meshes, fully parallelized, and up to fifth order accurate (WENO5) in space. For unsteady problems, the resulting nonlinear spatial discretization yields a set of ODEs in time, which in turn is solved via high order implicit time-stepping with error control. For the steady-state case, they need to solve the nonlinear system, typically by Newton-Krylov iterations. Several numerical examples are presented to demonstrate the accuracy, non-oscillatory nature and efficiency of these high order methods, in comparison with other fixed-stencil schemes.
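    The smoothness-biased stencil selection can be illustrated with the classic third-order WENO reconstruction at a cell face. This is a generic WENO3 sketch, not the authors' three-dimensional WENO5 implementation:

```python
import numpy as np

def weno3_face(u, i, eps=1e-6):
    """3rd-order WENO value at face i+1/2 from cells i-1, i, i+1."""
    v0 = -0.5 * u[i-1] + 1.5 * u[i]      # candidate from the left-biased stencil
    v1 =  0.5 * u[i]   + 0.5 * u[i+1]    # candidate from the centered stencil
    b0 = (u[i] - u[i-1])**2              # smoothness indicators
    b1 = (u[i+1] - u[i])**2
    a0 = (1.0/3.0) / (eps + b0)**2       # weights biased toward smoother data
    a1 = (2.0/3.0) / (eps + b1)**2
    return (a0 * v0 + a1 * v1) / (a0 + a1)
```

    For smooth data both stencils agree and the ideal weights give third-order accuracy; across a jump the weight of the oscillatory stencil collapses like 1/b^2, which is what suppresses the O(1) oscillations.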

  1. Multi-dimensional ENO schemes for general geometries

    NASA Technical Reports Server (NTRS)

    Harten, Ami; Chakravarthy, Sukumar R.

    1991-01-01

    A class of ENO schemes is presented for the numerical solution of multidimensional hyperbolic systems of conservation laws in structured and unstructured grids. This is a class of shock-capturing schemes which are designed to compute cell-averages to high order accuracy. The ENO scheme is composed of a piecewise-polynomial reconstruction of the solution from its given cell-averages, approximate evolution of the resulting initial value problem, and averaging of this approximate solution over each cell. The reconstruction algorithm is based on an adaptive selection of stencil for each cell so as to avoid spurious oscillations near discontinuities while achieving high order of accuracy away from them.

  2. Implicit schemes and parallel computing in unstructured grid CFD

    NASA Technical Reports Server (NTRS)

    Venkatakrishnam, V.

    1995-01-01

    The development of implicit schemes for obtaining steady state solutions to the Euler and Navier-Stokes equations on unstructured grids is outlined. Applications are presented that compare the convergence characteristics of various implicit methods. The development of explicit and implicit schemes to compute unsteady flows on unstructured grids is then discussed. Next, the issues involved in parallelizing finite volume schemes on unstructured meshes in an MIMD (multiple instruction/multiple data stream) fashion are outlined. Techniques for partitioning unstructured grids among processors and for extracting parallelism in explicit and implicit solvers are discussed. Finally, some dynamic load balancing ideas, which are useful in adaptive transient computations, are presented.

  3. Simulation of transients in natural gas pipelines using hybrid TVD schemes

    NASA Astrophysics Data System (ADS)

    Zhou, Junyang; Adewumi, Michael A.

    2000-02-01

    The mathematical model describing transients in natural gas pipelines constitutes a non-homogeneous system of non-linear hyperbolic conservation laws. The time splitting approach is adopted to solve this non-homogeneous hyperbolic model. At each time step, the non-homogeneous hyperbolic model is split into a homogeneous hyperbolic model and an ODE operator. An explicit 5-point, second-order-accurate total variation diminishing (TVD) scheme is formulated to solve the homogeneous system of non-linear hyperbolic conservation laws. Special attention is given to the treatment of boundary conditions at the inlet and the outlet of the pipeline. Hybrid methods involving the Godunov scheme (TVD/Godunov scheme), the Roe scheme (TVD/Roe scheme), or the Lax-Wendroff scheme (TVD/LW scheme) are used to achieve an appropriate boundary-handling strategy. A severe condition involving instantaneous closure of a downstream valve is used to test the efficacy of the new schemes. The results produced by the TVD/Roe and TVD/Godunov schemes are excellent and comparable with each other, while the TVD/LW scheme performs reasonably well. The TVD/Roe scheme is applied to simulate the transport of a fast transient in a short pipe and the propagation of a slow transient in a long transmission pipeline. For the first example, the scheme produces excellent results, which capture and maintain the integrity of the wave fronts even after a long time. For the second example, comparisons of computational results are made using different discretizing parameters.
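    The building block of such TVD schemes is a limited, second-order upwind update. A minimal sketch for linear advection with a minmod limiter and periodic boundaries (illustrative only; the paper's scheme is a 5-point formulation with inlet/outlet boundary handling):

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: smallest-magnitude argument when signs agree, else zero."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step(u, a, dt, dx):
    """One forward-Euler step of limited upwind advection (a > 0, periodic BCs)."""
    up = np.roll(u, -1)                 # u[i+1]
    um = np.roll(u, 1)                  # u[i-1]
    s = minmod(u - um, up - u)          # limited slope in each cell
    uL = u + 0.5 * s                    # reconstructed value at the right face
    flux = a * uL                       # upwind flux for a > 0
    return u - dt / dx * (flux - np.roll(flux, 1))
```

    The limiter drops the slope to zero at extrema and jumps, so the update is conservative and does not increase the total variation for Courant numbers up to about 0.5.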

  4. Implicit and explicit schemes for mass consistency preservation in hybrid particle/finite-volume algorithms for turbulent reactive flows

    SciTech Connect

    Popov, Pavel P.; Pope, Stephen B.

    2014-01-15

    This work addresses the issue of particle mass consistency in Large Eddy Simulation/Probability Density Function (LES/PDF) methods for turbulent reactive flows. Numerical schemes for the implicit and explicit enforcement of particle mass consistency (PMC) are introduced, and their performance is examined in a representative LES/PDF application, namely the Sandia–Sydney Bluff-Body flame HM1. A new combination of interpolation schemes for velocity and scalar fields is found to better satisfy PMC than multilinear and fourth-order Lagrangian interpolation. A second-order accurate time-stepping scheme for stochastic differential equations (SDE) is found to improve PMC relative to Euler time stepping, which is the first time that a second-order scheme is found to be beneficial, when compared to a first-order scheme, in an LES/PDF application. An explicit corrective velocity scheme for PMC enforcement is introduced, and its parameters optimized to enforce a specified PMC criterion with minimal corrective velocity magnitudes.

  5. Extended dielectric relaxation scheme for fluid transport simulations of high density plasma discharges

    NASA Astrophysics Data System (ADS)

    Kwon, Deuk-Chul; Song, Mi-Young; Yoon, Jung-Sik

    2014-10-01

    It is well known that the dielectric relaxation scheme (DRS) can efficiently overcome the limitation on the simulation time step for fluid transport simulations of high density plasma discharges. By imitating a realistic and physical shielding process of electric field perturbation, the DRS overcomes the dielectric limitation on the time step. However, the electric field was obtained by assuming the drift-diffusion approximation. Although the drift-diffusion expressions are good approximations for both the electrons and ions at high pressure, the inertial term cannot be neglected in the ion momentum equation at low pressure. Therefore, in this work, we developed an extended DRS by introducing an effective electric field. To compare the extended DRS with the previous method, two-dimensional fluid simulations for inductively coupled plasma discharges were performed. This work was supported by the Industrial Strategic Technology Development Program (10041637, Development of Dry Etch System for 10 nm class SADP Process) funded by the Ministry of Knowledge Economy (MKE, Korea).

  6. ESCAP mobile training scheme.

    PubMed

    Yasas, F M

    1977-01-01

    In response to a United Nations resolution, the Mobile Training Scheme (MTS) was set up to provide training to the trainers of national cadres engaged in frontline and supervisory tasks in social welfare and rural development. The training is innovative in being based on an analysis of field realities. The MTS team consisted of a leader, an expert on teaching methods and materials, and an expert on action research and evaluation. The country's trainers from different departments were sent to villages to work for a short period and to report their problems in fulfilling their roles. From these grass-roots experiences, they made an analysis of the job, determining what knowledge, attitudes and skills it required. Analysis of daily incidents and problems was used to produce indigenous teaching materials drawn from actual field practice. Trainees also learned how to carry the problems they encountered through government structures for policy making and decisions. Tasks of the students were to identify the skills needed for role performance by job analysis, daily diaries and project histories; to analyze the particular community by village profiles; to produce indigenous teaching materials; and to practice the role skills by actual role performance. The MTS scheme was tried in Nepal in 1974-75; 3 training programs trained 25 trainers and 51 frontline workers; indigenous teaching materials were created; technical papers were written; and consultations were provided. In Afghanistan the scheme was used in 1975-76; 45 participants completed the training; seminars were held; and an ongoing Council was created. It is hoped that the training program will be expanded to other countries. PMID:12265562

  7. New high order schemes in BATS-R-US

    NASA Astrophysics Data System (ADS)

    Toth, G.; van der Holst, B.; Daldorff, L.; Chen, Y.; Gombosi, T. I.

    2013-12-01

    The University of Michigan global magnetohydrodynamics code BATS-R-US has long relied on block-adaptive mesh refinement (AMR) to increase accuracy in regions of interest, together with a second-order accurate TVD scheme. While AMR can in principle produce arbitrarily accurate results, there are still practical limitations due to computational resources. To further improve the accuracy of the BATS-R-US code, we have recently implemented a 4th order accurate finite volume scheme (McCorquodale and Colella, 2011), the 5th order accurate Monotonicity Preserving scheme (MP5, Suresh and Huynh, 1997) and the 5th order accurate CWENO5 scheme (Capdeville, 2008). In the first implementation the high order accuracy is achieved in the uniform parts of the Cartesian grids, and we still use the second order TVD scheme at resolution changes. For spherical grids the new schemes are only second order accurate so far, but still much less diffusive than the TVD scheme. We show a few verification tests that demonstrate the order of accuracy, as well as challenging space physics applications. The high order schemes are less robust than the TVD scheme, and it requires some tricks and effort to make the code work. When the high order scheme works, however, we find that in most cases it can obtain similar or better results than the TVD scheme on twice finer grids. For three-dimensional time-dependent simulations this means that the high order scheme is almost 10 times faster and requires 8 times less storage than the second order method.

  8. Using Steffe's Advanced Fraction Schemes

    ERIC Educational Resources Information Center

    McCloskey, Andrea V.; Norton, Anderson H.

    2009-01-01

    Recognizing schemes, which are different from strategies, can help teachers understand their students' thinking about fractions. Using Steffe's advanced fraction schemes, the authors describe a progression of development that upper elementary and middle school students might follow in understanding fractions. Each scheme can be viewed as a…

  9. Stability of mixed time integration schemes for transient thermal analysis

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Lin, J. I.

    1982-01-01

    A current research topic in coupled-field problems is the development of effective transient algorithms that permit different time integration methods with different time steps to be used simultaneously in various regions of the problems. The implicit-explicit approach seems to be very successful in structural, fluid, and fluid-structure problems. This paper summarizes this research direction. A family of mixed time integration schemes, with the capabilities mentioned above, is also introduced for transient thermal analysis. A stability analysis and the computer implementation of this technique are also presented. In particular, it is shown that the mixed time implicit-explicit methods provide a natural framework for the further development of efficient, clean, modularized computer codes.
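    The implicit-explicit idea can be shown on a scalar model problem split into a stiff part (treated implicitly) and a nonstiff part (treated explicitly). A first-order sketch under that assumption, unrelated to any particular thermal code:

```python
def imex_euler(u, a_stiff, b_nonstiff, dt, steps):
    """First-order IMEX Euler for u' = a_stiff*u + b_nonstiff*u.

    The stiff coefficient is integrated implicitly (unconditionally stable for
    a_stiff < 0), the nonstiff one explicitly (no solve beyond a scalar divide).
    """
    for _ in range(steps):
        u = (u + dt * b_nonstiff * u) / (1.0 - dt * a_stiff)
    return u
```

    With a_stiff = -1000 and dt = 0.01, a fully explicit update would diverge (the amplification factor |1 + dt*a| = 9 exceeds 1), while the IMEX update decays stably; this is the payoff of mixing integration methods across a problem.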

  10. Positivity-preserving numerical schemes for multidimensional advection

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Macvean, M. K.; Lock, A. P.

    1993-01-01

    This report describes the construction of an explicit, single time-step, conservative, finite-volume method for multidimensional advective flow, based on a uniformly third-order polynomial interpolation algorithm (UTOPIA). Particular attention is paid to the problem of flow-to-grid angle-dependent, anisotropic distortion typical of one-dimensional schemes used component-wise. The third-order multidimensional scheme automatically includes certain cross-difference terms that guarantee good isotropy (and stability). However, above first-order, polynomial-based advection schemes do not preserve positivity (the multidimensional analogue of monotonicity). For this reason, a multidimensional generalization of the first author's universal flux-limiter is sought. This is a very challenging problem. A simple flux-limiter can be found; but this introduces strong anisotropic distortion. A more sophisticated technique, limiting part of the flux and then restoring the isotropy-maintaining cross-terms afterwards, gives more satisfactory results. Test cases are confined to two dimensions; three-dimensional extensions are briefly discussed.

  11. Recent developments in shock-capturing schemes

    NASA Technical Reports Server (NTRS)

    Harten, Ami

    1991-01-01

    The development of the shock capturing methodology is reviewed, paying special attention to the increasing nonlinearity in its design and its relation to interpolation. It is well-known that higher-order approximations to a discontinuous function generate spurious oscillations near the discontinuity (Gibbs phenomenon). Unlike standard finite-difference methods which use a fixed stencil, modern shock capturing schemes use an adaptive stencil which is selected according to the local smoothness of the solution. Near discontinuities this technique automatically switches to one-sided approximations, thus avoiding the use of discontinuous data which brings about spurious oscillations.

  12. Adaptive Management

    EPA Science Inventory

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...

  13. Implicit approximate-factorization schemes for the low-frequency transonic equation

    NASA Technical Reports Server (NTRS)

    Ballhaus, W. F.; Steger, J. L.

    1975-01-01

    Two- and three-level implicit finite-difference algorithms for the low-frequency transonic small-disturbance equation are constructed using approximate factorization techniques. The schemes are unconditionally stable for the model linear problem. For nonlinear mixed flows, the schemes maintain stability by the use of conservatively switched difference operators for which stability is maintained only if shock propagation is restricted to be less than one spatial grid point per time step. The shock-capturing properties of the schemes were studied for various shock motions that might be encountered in problems of engineering interest. Computed results for a model airfoil problem that produces a flow field similar to that about a helicopter rotor in forward flight show the development of a shock wave and its subsequent propagation upstream off the front of the airfoil.

  14. A practical numerical scheme for the ternary Cahn-Hilliard system with a logarithmic free energy

    NASA Astrophysics Data System (ADS)

    Jeong, Darae; Kim, Junseok

    2016-01-01

    We consider a practically stable finite difference method for the ternary Cahn-Hilliard system with a logarithmic free energy modeling the phase separation of a three-component mixture. The numerical scheme is based on a linear unconditionally gradient stable scheme by Eyre and is solved by an efficient and accurate multigrid method. The logarithmic function has a singularity at zero. To remove the singularity, we regularize the function near zero by using a quadratic polynomial approximation. We perform a convergence test, a linear stability analysis, and a robustness test of the ternary Cahn-Hilliard equation. We observe that our numerical solutions are convergent, consistent with the exact solutions of linear stability analysis, and stable with practically large enough time steps. Using the proposed numerical scheme, we also study the temporal evolution of morphology patterns during phase separation in one-, two-, and three-dimensional spaces.
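    The regularization step can be sketched for the entropy contribution c*ln(c): below a cutoff delta the function is replaced by its second-order Taylor polynomial about delta, removing the singularity at zero. The cutoff value here is illustrative, not the paper's choice:

```python
import math

def reg_clogc(c, delta=1e-8):
    """c*log(c) with a quadratic extension below delta (removes the c=0 singularity)."""
    if c >= delta:
        return c * math.log(c)
    # Taylor expansion of f(c) = c*log(c) about c = delta:
    f = delta * math.log(delta)      # f(delta)
    fp = math.log(delta) + 1.0       # f'(delta)
    fpp = 1.0 / delta                # f''(delta)
    d = c - delta
    return f + fp * d + 0.5 * fpp * d * d
```

    The regularized function matches the original value and first two derivatives at the cutoff, so the gradient-stable scheme sees a smooth, everywhere-finite free energy.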

  15. Decentralized digital adaptive control of robot motion

    NASA Technical Reports Server (NTRS)

    Tarokh, M.

    1990-01-01

    A decentralized model reference adaptive scheme is developed for digital control of robot manipulators. The adaptation laws are derived using hyperstability theory, which guarantees asymptotic trajectory tracking despite gross robot parameter variations. The control scheme has a decentralized structure in the sense that each local controller receives only its joint angle measurement to produce its joint torque. The independent joint controllers have simple structures and can be programmed using a very simple and computationally fast algorithm. As a result, the scheme is suitable for real-time motion control.

  16. SEAWAT 2000: modelling unstable flow and sensitivity to discretization levels and numerical schemes

    NASA Astrophysics Data System (ADS)

    Al-Maktoumi, A.; Lockington, D. A.; Volker, R. E.

    2007-09-01

    A systematic analysis shows how results from the finite difference code SEAWAT are sensitive to the choice of grid dimension, time step, and numerical scheme for unstable flow problems. Guidelines to assist in selecting appropriate combinations of these factors are suggested. While the SEAWAT code has been tested for a wide range of problems, the sensitivity of results to spatial and temporal discretization levels and numerical schemes has not been studied in detail for unstable flow problems. Here, the Elder-Voss-Souza benchmark problem has been used to systematically explore the sensitivity of SEAWAT output to spatio-temporal resolution and numerical solver choice. A grid size of 0.38 and 0.60% of the total domain length and depth, respectively, is found to be fine enough to deliver results with acceptable accuracy for most of the numerical schemes when the Courant number (Cr) is 0.1. All numerical solvers produced similar results for extremely fine meshes; however, some schemes converged faster than others. For instance, the 3rd-order total variation-diminishing (TVD3) scheme converged at a much coarser mesh than the standard finite difference method (SFDM) with upstream weighting (UW). The sensitivity of the results to the Cr number depends on the numerical scheme, as expected.
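    The Courant-number constraint used in the study translates directly into a time-step bound via Cr = |v|*dt/dx. A one-line helper (illustrative, not part of SEAWAT):

```python
import numpy as np

def advective_dt(v, dx, cr_target=0.1):
    """Largest dt keeping the cell Courant number at cr_target: Cr = |v|*dt/dx."""
    return cr_target * dx / np.max(np.abs(v))
```

    With the study's Cr = 0.1, halving the grid spacing also halves the admissible time step, which is why spatial and temporal resolution must be chosen together.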

  17. A practical scheme for adaptive aircraft flight control systems

    NASA Technical Reports Server (NTRS)

    Athans, M.; Willner, D.

    1974-01-01

    A flight control system design is presented, that can be implemented by analog hardware, to be used to control an aircraft with uncertain parameters. The design is based upon the use of modern control theory. The ideas are illustrated by considering control of STOL longitudinal dynamics.

  18. An improved SPH scheme for cosmological simulations

    NASA Astrophysics Data System (ADS)

    Beck, A. M.; Murante, G.; Arth, A.; Remus, R.-S.; Teklu, A. F.; Donnert, J. M. F.; Planelles, S.; Beck, M. C.; Förster, P.; Imgrund, M.; Dolag, K.; Borgani, S.

    2016-01-01

    We present an implementation of smoothed particle hydrodynamics (SPH) with improved accuracy for simulations of galaxies and the large-scale structure. In particular, we implement and test a large set of SPH improvements in the developer version of GADGET-3. We use the Wendland kernel functions, a particle wake-up time-step limiting mechanism and a time-dependent scheme for artificial viscosity including high-order gradient computation and a shear flow limiter. Additionally, we include a novel prescription for time-dependent artificial conduction, which corrects for gravitationally induced pressure gradients and improves the SPH performance in capturing the development of gas-dynamical instabilities. We extensively test our new implementation in a wide range of hydrodynamical standard tests including weak and strong shocks as well as shear flows, turbulent spectra, gas mixing, hydrostatic equilibria and self-gravitating gas clouds. We jointly employ all modifications; however, when necessary we study the performance of individual code modules. We approximate hydrodynamical states more accurately and with significantly less noise than standard GADGET-SPH. Furthermore, the new implementation promotes the mixing of entropy between different fluid phases, also within cosmological simulations. Finally, we study the performance of the hydrodynamical solver in the context of radiative galaxy formation and non-radiative galaxy cluster formation. We find galactic discs to be colder and more extended, and galaxy clusters show entropy cores instead of steadily declining entropy profiles. In summary, we demonstrate that our improved SPH implementation overcomes most of the undesirable limitations of standard GADGET-SPH, thus becoming the core of an efficient code for large cosmological simulations.

  19. Discrete unified gas kinetic scheme for all Knudsen number flows. II. Thermal compressible case.

    PubMed

    Guo, Zhaoli; Wang, Ruijie; Xu, Kun

    2015-03-01

    This paper is a continuation of our work on the development of multiscale numerical scheme from low-speed isothermal flow to compressible flows at high Mach numbers. In our earlier work [Z. L. Guo et al., Phys. Rev. E 88, 033305 (2013)], a discrete unified gas kinetic scheme (DUGKS) was developed for low-speed flows in which the Mach number is small so that the flow is nearly incompressible. In the current work, we extend the scheme to compressible flows with the inclusion of thermal effect and shock discontinuity based on the gas kinetic Shakhov model. This method is an explicit finite-volume scheme with the coupling of particle transport and collision in the flux evaluation at a cell interface. As a result, the time step of the method is not limited by the particle collision time. With the variation of the ratio between the time step and particle collision time, the scheme is an asymptotic preserving (AP) method, where both the Chapman-Enskog expansion for the Navier-Stokes solution in the continuum regime and the free transport mechanism in the rarefied limit can be precisely recovered with a second-order accuracy in both space and time. The DUGKS is an idealized multiscale method for all Knudsen number flow simulations. A number of numerical tests, including the shock structure problem, the Sod tube problem in a whole range of degree of rarefaction, and the two-dimensional Riemann problem in both continuum and rarefied regimes, are performed to validate the scheme. Comparisons with the results of direct simulation Monte Carlo (DSMC) and other benchmark data demonstrate that the DUGKS is a reliable and efficient method for multiscale flow problems. PMID:25871252

  20. Discrete unified gas kinetic scheme for all Knudsen number flows. II. Thermal compressible case

    NASA Astrophysics Data System (ADS)

    Guo, Zhaoli; Wang, Ruijie; Xu, Kun

    2015-03-01

    This paper is a continuation of our work on the development of multiscale numerical scheme from low-speed isothermal flow to compressible flows at high Mach numbers. In our earlier work [Z. L. Guo et al., Phys. Rev. E 88, 033305 (2013), 10.1103/PhysRevE.88.033305], a discrete unified gas kinetic scheme (DUGKS) was developed for low-speed flows in which the Mach number is small so that the flow is nearly incompressible. In the current work, we extend the scheme to compressible flows with the inclusion of thermal effect and shock discontinuity based on the gas kinetic Shakhov model. This method is an explicit finite-volume scheme with the coupling of particle transport and collision in the flux evaluation at a cell interface. As a result, the time step of the method is not limited by the particle collision time. With the variation of the ratio between the time step and particle collision time, the scheme is an asymptotic preserving (AP) method, where both the Chapman-Enskog expansion for the Navier-Stokes solution in the continuum regime and the free transport mechanism in the rarefied limit can be precisely recovered with a second-order accuracy in both space and time. The DUGKS is an idealized multiscale method for all Knudsen number flow simulations. A number of numerical tests, including the shock structure problem, the Sod tube problem in a whole range of degree of rarefaction, and the two-dimensional Riemann problem in both continuum and rarefied regimes, are performed to validate the scheme. Comparisons with the results of direct simulation Monte Carlo (DSMC) and other benchmark data demonstrate that the DUGKS is a reliable and efficient method for multiscale flow problems.

  1. Fast transport simulation with an adaptive grid refinement.

    PubMed

    Haefner, Frieder; Boy, Siegrun

    2003-01-01

    One of the main difficulties in transport modeling and calibration is the extraordinarily long computing times necessary for simulation runs. Improved execution time is a prerequisite for calibration in transport modeling. In this paper we investigate the problem of code acceleration using an adaptive grid refinement, neglecting subdomains, and devising a method by which the Courant condition can be ignored while maintaining accurate solutions. Grid refinement is based on dividing selected cells into regular subcells and including the balance equations of subcells in the equation system. The connection of coarse and refined cells satisfies the mass balance with an interpolation scheme that is implicitly included in the equation system. The refined subdomain can move with the average transport velocity of the subdomain. Very small time steps are required on a fine or a refined grid, because of the combined effect of the Courant and Peclet conditions. Therefore, we have developed a special upwind technique in small grid cells with high velocities (velocity suppression). We have neglected grid subdomains with very small concentration gradients (zero suppression). The resulting software, MODCALIF, is a three-dimensional, modularly constructed FORTRAN code. For convenience, the package names used by the well-known MODFLOW and MT3D computer programs are adopted, and the same input file structure and format is used, but the program presented here is separate and independent. Also, MODCALIF includes algorithms for variable density modeling and model calibration. The method is tested by comparison with an analytical solution, and illustrated by means of a two-dimensional theoretical example and three-dimensional simulations of the variable-density Cape Cod and SALTPOOL experiments. 
Crossing from fine to coarse grid produces numerical dispersion when the whole subdomain of interest is refined; however, we show that accurate solutions can be obtained using a fraction of the

  2. Direct adaptive control of manipulators in Cartesian space

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    A new adaptive-control scheme for direct control of manipulator end effector to achieve trajectory tracking in Cartesian space is developed in this article. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of adaptive feedforward control and the inclusion of auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for on-line implementation with high sampling rates. The control scheme is applied to a two-link manipulator for illustration.
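    The flavor of such MRAC schemes can be conveyed by the standard first-order scalar example: a plant with an unknown pole, a reference model, and gradient adaptation of feedforward and feedback gains. This is a textbook sketch, not the article's Cartesian-space controller; all parameter values are illustrative:

```python
import numpy as np

def mrac_sim(a=1.0, am=2.0, gamma=2.0, r=1.0, dt=0.002, steps=50000):
    """Scalar MRAC: plant y' = a*y + u with unknown a; model ym' = -am*ym + am*r."""
    y = ym = ky = kr = 0.0
    err = np.empty(steps)
    for k in range(steps):
        u = kr * r + ky * y              # adaptive feedforward + feedback control
        e = y - ym                       # model-following error
        kr += dt * (-gamma * e * r)      # Lyapunov-derived adaptation laws
        ky += dt * (-gamma * e * y)
        y  += dt * (a * y + u)           # forward-Euler plant update
        ym += dt * (-am * ym + am * r)   # reference model
        err[k] = e
    return err
```

    The error dynamics satisfy a Lyapunov inequality, so the model-following error decays even though the gains need not converge to their "true" values; no estimate of the plant parameter a is ever formed, mirroring the non-estimation-based design above.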

  3. Adaptive entropy coded subband coding of images.

    PubMed

    Kim, Y H; Modestino, J W

    1992-01-01

    The authors describe a design approach, called 2-D entropy-constrained subband coding (ECSBC), based upon recently developed 2-D entropy-constrained vector quantization (ECVQ) schemes. The output indexes of the embedded quantizers are further compressed by use of noiseless entropy coding schemes, such as Huffman or arithmetic codes, resulting in variable-rate outputs. Depending upon the specific configurations of the ECVQ and the ECPVQ over the subbands, many different types of SBC schemes can be derived within the generic 2-D ECSBC framework. Among these, the authors concentrate on three representative types of 2-D ECSBC schemes and provide relative performance evaluations. They also describe an adaptive buffer instrumented version of 2-D ECSBC, called 2-D ECSBC/AEC, for use with fixed-rate channels which completely eliminates buffer overflow/underflow problems. This adaptive scheme achieves performance quite close to the corresponding ideal 2-D ECSBC system. PMID:18296138
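    The noiseless entropy-coding stage mentioned above can be illustrated with a tiny Huffman code builder (a generic sketch; the paper's coders operate on quantizer output indexes, not raw symbols):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix code from symbol frequencies (assumes >= 2 distinct symbols)."""
    freq = Counter(symbols)
    # heap entries: (count, tie-breaker, {symbol: partial codeword})
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]
```

    Frequent symbols get short codewords and rare ones long codewords, which is what yields the variable-rate output that the adaptive buffer control then smooths for fixed-rate channels.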

  4. Adaptive control of robotic manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    The author presents a novel approach to adaptive control of manipulators to achieve trajectory tracking by the joint angles. The central concept in this approach is the utilization of the manipulator inverse as a feedforward controller. The desired trajectory is applied as an input to the feedforward controller which behaves as the inverse of the manipulator at any operating point; the controller output is used as the driving torque for the manipulator. The controller gains are then updated by an adaptation algorithm derived from MRAC (model reference adaptive control) theory to cope with variations in the manipulator inverse due to changes of the operating point. An adaptive feedback controller and an auxiliary signal are also used to enhance closed-loop stability and to achieve faster adaptation. The proposed control scheme is computationally fast and does not require a priori knowledge of the complex dynamic model or the parameter values of the manipulator or the payload.
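    The spirit of such a direct MRAC law can be illustrated with a scalar sketch. The code below adapts a single feedforward gain with the classic MIT rule so a first-order plant tracks a reference model; the plant gain is never estimated, only the tracking error drives adaptation. All parameter values are hypothetical, and the paper's actual multivariable adaptation laws are more elaborate.

```python
def mrac_gain(steps=5000, dt=0.01, gamma=0.5):
    """Direct MRAC in miniature: adapt a feedforward gain theta with the
    MIT rule so the plant y' = -y + k*theta*r tracks the reference model
    ym' = -ym + k0*r. The plant gain k is unknown to the controller."""
    k, k0 = 2.0, 1.0             # true plant gain (not estimated), model gain
    y = ym = theta = 0.0
    r = 1.0                      # constant reference input
    for _ in range(steps):
        e = y - ym               # tracking error
        y += dt * (-y + k * theta * r)
        ym += dt * (-ym + k0 * r)
        theta += dt * (-gamma * e * ym)   # MIT rule: error-driven update
    return theta, y, ym

theta, y, ym = mrac_gain()       # theta converges toward k0/k = 0.5
```

The adapted gain settles at the value that makes the closed loop match the model, without any parameter identification step.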

  5. Adaptive Finite Element Methods for Continuum Damage Modeling

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.

    1995-01-01

    The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time-step selection is controlled by the required temporal accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in the accurate prediction of damage levels and failure time.
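    The predictor-corrector idea behind adaptive time-stepping can be sketched compactly: the difference between a low-order predictor and a higher-order corrector estimates the local truncation error, which then drives the step size. This is a minimal scalar sketch with invented tolerances, not the paper's viscoplastic solver.

```python
def try_step(f, t, y, dt):
    """Forward-Euler predictor + trapezoidal corrector; their difference
    estimates the local temporal truncation error."""
    pred = y + dt * f(t, y)
    corr = y + 0.5 * dt * (f(t, y) + f(t + dt, pred))
    return corr, abs(corr - pred)

def integrate(f, y0, t_end, tol=1e-5, dt=0.1):
    t, y = 0.0, y0
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        y_new, err = try_step(f, t, y, dt)
        if err <= tol:                    # accept: error within tolerance
            t, y = t + dt, y_new
        # grow or shrink the next (or retried) step toward the tolerance
        dt *= 0.9 * min(2.0, max(0.1, (tol / max(err, 1e-14)) ** 0.5))
    return y

y = integrate(lambda t, y: -y, 1.0, 1.0)  # y' = -y, exact answer exp(-1)
```

Rejected steps are simply retried with the reduced step size, so the error stays near the user tolerance without any iteration within a step.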

  6. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria for adaptation are most suitable. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times.

  7. Comment on 'Shang S. 2012. Calculating actual crop evapotranspiration under soil water stress conditions with appropriate numerical methods and time step. Hydrological Processes 26: 3338-3343. DOI: 10.1002/hyp.8405'

    NASA Technical Reports Server (NTRS)

    Yatheendradas, Soni; Narapusetty, Balachandrudu; Peters-Lidard, Christa; Funk, Christopher; Verdin, James

    2014-01-01

    A previous study analyzed errors in the numerical calculation of actual crop evapotranspiration (ET(sub a)) under soil water stress. Assuming no irrigation or precipitation, it constructed equations for ET(sub a) over limited soil-water ranges in a root zone drying out due to evapotranspiration. It then used a single crop-soil composite to provide recommendations about the appropriate usage of numerical methods under different values of the time step and the maximum crop evapotranspiration (ET(sub c)). This comment reformulates those ET(sub a) equations for applicability over the full range of soil water values, revealing a dependence of the relative error in numerical ET(sub a) on the initial soil water that was not seen in the previous study. It is shown that the recommendations based on a single crop-soil composite can be invalid for other crop-soil composites. Finally, the numerical error in the time-cumulative value of ET(sub a) is considered, in addition to the per-time-step error examined in the previous study; this cumulative ET(sub a) is more relevant to the final crop yield.
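    The time-step dependence of the cumulative ET error can be seen in a toy model. Assuming a linear soil-water stress function (ET(sub a) proportional to remaining soil water), the soil water decays exponentially and the cumulative ET has a closed form against which an explicit difference scheme can be compared. All parameter values below are invented for illustration.

```python
import math

def cumulative_eta(w0, w_max, etc, dt, t_end):
    """Explicit-difference cumulative ET_a when actual ET is linearly
    reduced by soil-water stress: dW/dt = -ETc * W / Wmax."""
    w, total = w0, 0.0
    for _ in range(round(t_end / dt)):
        eta = etc * w / w_max         # ET_a limited by available soil water
        w -= eta * dt
        total += eta * dt
    return total

w0, w_max, etc, t_end = 60.0, 100.0, 8.0, 10.0   # hypothetical values (mm, days)
exact = w0 * (1.0 - math.exp(-etc * t_end / w_max))  # closed-form cumulative ET_a
rel_err = {dt: abs(cumulative_eta(w0, w_max, etc, dt, t_end) - exact) / exact
           for dt in (1.0, 0.1)}
```

Shrinking the time step by a factor of ten shrinks the relative error in the cumulative ET by roughly the same factor, the first-order behavior the comment's error analysis quantifies.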

  8. An adaptive replacement algorithm for paged-memory computer systems.

    NASA Technical Reports Server (NTRS)

    Thorington, J. M., Jr.; Irwin, J. D.

    1972-01-01

    A general class of adaptive replacement schemes for use in paged memories is developed. One such algorithm, called SIM, is simulated using a probability model that generates memory traces, and the results of the simulation of this adaptive scheme are compared with those obtained using the best non-lookahead algorithms. A technique for implementing this type of adaptive replacement algorithm with state-of-the-art digital hardware is also presented.
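    The flavor of an adaptive replacement policy can be sketched with a toy cache whose per-page weights decay on every reference, so eviction decisions track the recent reference pattern. This is a generic illustration only; the paper's SIM algorithm is not specified in the abstract.

```python
class AdaptiveCache:
    """Toy adaptive replacement: per-page scores decay on each reference
    and hits reinforce, so the victim choice adapts to recent behavior.
    (A generic sketch -- not the SIM algorithm from the paper.)"""
    def __init__(self, frames, decay=0.9):
        self.frames, self.decay = frames, decay
        self.score = {}          # page -> adaptively updated weight
        self.faults = 0

    def reference(self, page):
        for p in self.score:     # old evidence fades each reference
            self.score[p] *= self.decay
        if page in self.score:
            self.score[page] += 1.0          # a hit reinforces the page
            return
        self.faults += 1
        if len(self.score) >= self.frames:   # evict the lowest-weight page
            victim = min(self.score, key=self.score.get)
            del self.score[victim]
        self.score[page] = 1.0

cache = AdaptiveCache(frames=3)
for page in [1, 2, 3, 1, 2, 4, 1, 2, 5, 1, 2]:
    cache.reference(page)
```

On this trace the frequently re-referenced pages 1 and 2 are retained while the one-shot pages 3 and 4 are evicted.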

  9. Adaptive Implicit Non-Equilibrium Radiation Diffusion

    SciTech Connect

    Philip, Bobby; Wang, Zhen; Berrill, Mark A; Rodriguez Rodriguez, Manuel; Pernice, Michael

    2013-01-01

    We describe methods for accurate and efficient long term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long term time integration of stiff multiphysics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.

  10. Adaptive autofocusing: a closed-loop perspective.

    PubMed

    Zhang, Ying; Wen, Changyun; Soh, Yeng Chai; Fong, Aik Meng

    2005-04-01

    We present an adaptive autofocusing scheme. In this scheme, the focus measure is updated with focus tuning. To achieve this, we construct the focus measure by using image moments and develop an adaptive focus-tuning strategy to estimate the measure in closed loop. It is shown that the adaptive updating of the focus measure enables us to overcome the dependence of autofocusing on the image contents. Such an adaptive closed-loop focusing operation also effectively suppresses both the effect of the noise in optical imaging and the effect of time delay due to image processing time. Therefore a high accuracy of autofocusing is guaranteed. The effectiveness of the proposed scheme is demonstrated by simulations and experiments. PMID:15839269
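    A moment-based focus measure can be illustrated in a few lines: defocus blur averages neighboring pixels, which shrinks the second central moment of the intensity. The box blur below is an assumed stand-in for real optics, and this is a generic moment measure, not the specific closed-loop measure the paper constructs.

```python
import numpy as np

def focus_measure(img):
    """Second central moment (variance) of intensity as a focus measure:
    blur reduces local contrast, hence the spread about the mean."""
    return ((img - img.mean()) ** 2).mean()

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))          # stand-in for an in-focus image
# crude 3x3 box blur (periodic edges) as a stand-in for defocus
blurred = sum(np.roll(np.roll(sharp, i, 0), j, 1)
              for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
```

An autofocus loop would drive the lens to maximize such a measure; the paper's contribution is adapting the measure itself during focus tuning.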

  11. A mass-conserving advection scheme for offline simulation of scalar transport in coastal ocean models

    NASA Astrophysics Data System (ADS)

    Gillibrand, P. A.; Herzfeld, M.

    2016-05-01

    We present a flux-form semi-Lagrangian (FFSL) advection scheme designed for offline scalar transport simulation with coastal ocean models using curvilinear horizontal coordinates. The scheme conserves mass, overcoming problems of mass conservation typically experienced with offline transport models, and permits long time steps (relative to the Courant number) to be used by the offline model. These attributes make the method attractive for offline simulation of tracers in biogeochemical or sediment transport models using archived flow fields from hydrodynamic models. We describe the FFSL scheme, and test it on two idealised domains and one real domain, the Great Barrier Reef in Australia. For comparison, we also include simulations using a traditional semi-Lagrangian advection scheme for the offline simulations. We compare tracer distributions predicted by the offline FFSL transport scheme with those predicted by the original hydrodynamic model, assess the conservation of mass in all cases and contrast the computational efficiency of the schemes. We find that the FFSL scheme produced very good agreement with the distributions of tracer predicted by the hydrodynamic model, and conserved mass with an error of a fraction of one percent. In terms of computational speed, the FFSL scheme was comparable with the semi-Lagrangian method and an order of magnitude faster than the full hydrodynamic model, even when the latter ran in parallel on multiple cores. The FFSL scheme presented here therefore offers a viable mass-conserving and computationally-efficient alternative to traditional semi-Lagrangian schemes for offline scalar transport simulation in coastal models.
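    Why a flux-form scheme conserves mass by construction is easy to see in one dimension: the same face flux that leaves one cell enters its neighbor, so the total can only change through the boundaries (none, on a periodic grid). The sketch below uses first-order upwind fluxes for clarity; unlike the paper's FFSL scheme, this toy version is still limited to small Courant numbers.

```python
import numpy as np

def flux_form_step(q, u, dt, dx):
    """One flux-form (conservative) advection step on a periodic 1-D grid.
    Differencing face fluxes conserves total mass to round-off."""
    assert u >= 0, "sketch assumes non-negative velocity"
    flux = u * q                          # upwind: face i+1/2 carries q[i]
    return q - dt / dx * (flux - np.roll(flux, 1))

n, dx, u, dt = 100, 1.0, 1.0, 0.5
q = np.exp(-0.5 * ((np.arange(n) - 30.0) / 5.0) ** 2)   # Gaussian tracer
mass0 = q.sum() * dx
for _ in range(200):
    q = flux_form_step(q, u, dt, dx)
mass = q.sum() * dx
```

The tracer diffuses (first-order upwind is diffusive) but the total mass is unchanged to machine precision, which is the property that makes flux-form transport attractive for offline biogeochemical runs.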

  12. Solving Chemical Master Equations by an Adaptive Wavelet Method

    SciTech Connect

    Jahnke, Tobias; Galan, Steffen

    2008-09-01

    Solving chemical master equations is notoriously difficult due to the tremendous number of degrees of freedom. We present a new numerical method which efficiently reduces the size of the problem in an adaptive way. The method is based on a sparse wavelet representation and an algorithm which, in each time step, detects the essential degrees of freedom required to approximate the solution up to the desired accuracy.
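    The adaptive degree-of-freedom selection can be caricatured by thresholding: evolve the probability vector one step, then deactivate states whose probability falls below a tolerance. The paper uses a sparse wavelet basis rather than the direct truncation shown here, and the birth-death rates below are hypothetical.

```python
import numpy as np

def evolve_sparse(p, A, dt, tol):
    """One explicit-Euler step of dp/dt = A p, then drop degrees of freedom
    below tol -- a crude stand-in for adaptive wavelet thresholding."""
    p = p + dt * (A @ p)
    p[np.abs(p) < tol] = 0.0         # deactivate negligible states
    return p / p.sum()               # renormalize total probability

# toy immigration-death master equation on states 0..N (hypothetical rates)
N, birth, death = 50, 1.0, 0.5
A = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    if k < N:
        A[k + 1, k] += birth         # k -> k+1
        A[k, k] -= birth
    if k > 0:
        A[k - 1, k] += death * k     # k -> k-1
        A[k, k] -= death * k

p = np.zeros(N + 1)
p[0] = 1.0
for _ in range(200):
    p = evolve_sparse(p, A, dt=0.01, tol=1e-12)
active = int((p > 0).sum())
```

Only a small band of states near the distribution's bulk stays active, which is the source of the method's savings when the full state space is astronomically large.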

  13. High resolution Godunov-type schemes with small stencils

    NASA Astrophysics Data System (ADS)

    Guinot, Vincent

    2004-04-01

    Higher-order Godunov-type schemes have to cope with the following two problems: (i) the increase in the size of the stencil, which makes the scheme computationally expensive, and (ii) the monotonicity-preserving treatments (limiters) that must be implemented to avoid oscillations, which lead to strong damping of the solution, in particular of linear waves (e.g. acoustic waves). When too compressive, limiting procedures may also trigger the instability of oscillatory numerical solutions (e.g. in advection-dispersion phenomena) via the artificial amplification of the shorter modes. The present paper proposes a new approach to carry out the reconstruction. In this approach, the values of the flow variable at the edges of the computational cells are obtained directly from the reconstruction within these cells. This method is applied to the MUSCL and DPM schemes for the solution of the linear advection equation. The modified DPM scheme can capture contact discontinuities within one computational cell, even after millions of time steps at Courant numbers ranging from 1 to values as low as 10^-4. Linear waves are subject to negligible damping. Application of the method to the DPM for one-dimensional advection-dispersion problems shows that the numerical instability of oscillatory solutions caused by the overcompressive, original DPM limiter is eliminated. One- and two-dimensional shallow water simulations show an improvement over classical methods, in particular for two-dimensional problems with strongly distorted meshes. The quality of the computational solution in the two-dimensional case remains acceptable even for mesh aspect ratios Δx/Δy as large as 10. The method can be extended to the discretization of higher-order PDEs, allowing third-order space derivatives to be discretized using only two cells in space.
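    The interplay between limited reconstruction and oscillation control can be seen in a textbook MUSCL scheme for linear advection: a limited linear reconstruction in each cell supplies the face values, and the minmod limiter keeps the update free of spurious over- and undershoots. This is a standard sketch, not the modified DPM scheme of the paper.

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: zero at extrema, the smaller slope elsewhere."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_step(q, c):
    """MUSCL update for q_t + u q_x = 0 (u > 0, periodic grid,
    Courant number c = u*dt/dx) with limited linear reconstruction."""
    slope = minmod(q - np.roll(q, 1), np.roll(q, -1) - q)
    q_face = q + 0.5 * (1.0 - c) * slope     # reconstructed right-face value
    flux = c * q_face                        # upwind flux (times dt/dx)
    return q - (flux - np.roll(flux, 1))

n = 100
q = np.where((np.arange(n) > 20) & (np.arange(n) < 40), 1.0, 0.0)  # square wave
for _ in range(50):
    q = muscl_step(q, c=0.5)
```

After 50 steps the square wave has been advected 25 cells with smeared but monotone edges: the solution stays within its initial bounds and total mass is conserved exactly.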

  14. Adaptive SPECT

    PubMed Central

    Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.

    2008-01-01

    Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
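    The Hotelling-observer figure of merit mentioned above has a compact sample-based form: with class-mean difference Δg and average class covariance K, the detection SNR satisfies SNR² = Δgᵀ K⁻¹ Δg. The sketch below estimates it from simulated "images"; the data, dimensions, and signal profile are all invented.

```python
import numpy as np

def hotelling_snr(g0, g1):
    """Hotelling (ideal linear) observer SNR for signal detection:
    template w = K^{-1} * (mean1 - mean0), SNR^2 = ds^T K^{-1} ds,
    with K the average of the two class covariances."""
    ds = g1.mean(axis=0) - g0.mean(axis=0)
    K = 0.5 * (np.cov(g0, rowvar=False) + np.cov(g1, rowvar=False))
    return float(np.sqrt(ds @ np.linalg.solve(K, ds)))

rng = np.random.default_rng(42)
dim, n = 8, 2000
signal = np.zeros(dim)
signal[3] = 1.0                               # hypothetical signal profile
g0 = rng.standard_normal((n, dim))            # signal-absent data
g1 = rng.standard_normal((n, dim)) + signal   # signal-present data
snr = hotelling_snr(g0, g1)
```

For unit-variance white noise the true SNR equals the signal norm (here 1), and the sample estimate lands close to it; an adaptive system would choose the configuration maximizing this figure of merit.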

  15. Higher Order Time Integration Schemes for the Unsteady Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The efficiency gains obtained using higher-order implicit Runge-Kutta schemes, as compared with second-order accurate backward difference schemes, for the unsteady Navier-Stokes equations are investigated. Three different algorithms for solving the nonlinear system of equations arising at each time step are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a nonlinear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on inexact Newton methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques, and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step, while the other (PGMRES) uses the Generalized Minimal Residual method. Results demonstrating the relative superiority of these Newton-based schemes are presented. Efficiency gains as high as 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.
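    The core structure of an implicit time step with an inner Newton solve is easy to sketch. Below, one backward-Euler step solves g(Y) = Y - y - dt·f(Y) = 0 by Newton's method; the inner linear solve is done directly here, whereas the paper accelerates exactly this step with preconditioned Krylov methods such as GMRES. The stiff test problem is hypothetical.

```python
import numpy as np

def backward_euler_step(f, jac, y, dt, tol=1e-12, max_newton=20):
    """One backward-Euler step via Newton's method on
    g(Y) = Y - y - dt*f(Y); each iteration solves a linear system."""
    Y = y.copy()                               # initial Newton guess
    for _ in range(max_newton):
        g = Y - y - dt * f(Y)
        J = np.eye(len(y)) - dt * jac(Y)       # Jacobian of g
        delta = np.linalg.solve(J, -g)
        Y = Y + delta
        if np.linalg.norm(delta) < tol:        # Newton converged
            break
    return Y

# hypothetical stiff scalar test problem: y' = -1000*(y - 1)
f = lambda Y: -1000.0 * (Y - 1.0)
jac = lambda Y: np.array([[-1000.0]])
y = np.array([0.0])
for _ in range(10):            # dt far larger than the 1/1000 time scale
    y = backward_euler_step(f, jac, y, dt=0.1)
```

The implicit step remains stable at a time step 100 times larger than the fastest time scale, which is the payoff that justifies the cost of the nonlinear solve.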

  16. An efficient parallel implementation of explicit multirate Runge–Kutta schemes for discontinuous Galerkin computations

    SciTech Connect

    Seny, Bruno Lambrechts, Jonathan; Toulorge, Thomas; Legat, Vincent; Remacle, Jean-François

    2014-01-01

    Although explicit time integration schemes require small computational efforts per time step, their efficiency is severely restricted by their stability limits. Indeed, the multi-scale nature of some physical processes, combined with highly unstructured meshes, can lead some elements to impose a severely restrictive stable time step on the global problem. Multirate methods offer a way to increase the global efficiency by gathering grid cells in appropriate groups under local stability conditions. These methods are well suited to the discontinuous Galerkin framework. The parallelization of the multirate strategy is challenging because grid cells have different workloads. The computational cost is different for each sub-time step, depending on the elements involved, and a classical partitioning strategy is no longer adequate. In this paper, we propose a solution that makes use of multi-constraint mesh partitioning. It tends to minimize the inter-processor communications, while ensuring that the workload is almost equally shared by every computer core at every stage of the algorithm. Particular attention is given to the simplicity of the parallel multirate algorithm while minimizing computational and communication overheads. Our implementation makes use of the MeTiS library for mesh partitioning and the Message Passing Interface for inter-processor communication. Performance analyses for two- and three-dimensional practical applications confirm that multirate methods preserve important computational advantages of explicit methods up to a significant number of processors.

  17. Plotting and Scheming

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [Figures 1 and 2 removed for brevity; see original site]

    These two graphics are planning tools used by Mars Exploration Rover engineers to plot and scheme the perfect location to place the rock abrasion tool on the rock collection dubbed 'El Capitan' near Opportunity's landing site. 'El Capitan' is located within a larger outcrop nicknamed 'Opportunity Ledge.'

    The rover visualization team from NASA Ames Research Center, Moffett Field, Calif., initiated the graphics by putting two panoramic camera images of the 'El Capitan' area into their three-dimensional model. The rock abrasion tool team from Honeybee Robotics then used the visualization tool to help target and orient their instrument on the safest and most scientifically interesting locations. The blue circle represents one of two current targets of interest, chosen because of its size, lack of dust, and most of all its distinct and intriguing geologic features. To see the second target location, see the image titled 'Plotting and Scheming.'

    The rock abrasion tool is sensitive to the shape and texture of a rock, and must safely sit within the 'footprint' indicated by the blue circles. The rock area must be large enough to fit the contact sensor and grounding mechanism within the area of the outer blue circle, and the rock must be smooth enough to get an even grind within the abrasion area of the inner blue circle. If the rock abrasion tool were not grounded by its support mechanism or if the surface were uneven, it could 'run away' from its target. The rock abrasion tool is located on the rover's instrument deployment device, or arm.

    Over the next few martian days, or sols, the rover team will use these and newer, similar graphics created with more recent, higher-resolution panoramic camera images and super-spectral data from the miniature thermal emission spectrometer. These data will be used to pick the best

  18. Two hybrid ARQ error control schemes for near earth satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao

    1986-01-01

    Two hybrid automatic repeat request (ARQ) error control schemes are proposed for NASA near earth satellite communications. Both schemes are adaptive in nature, and employ cascaded codes to achieve both high reliability and throughput efficiency for high data rate file transfer.

  19. Two hybrid ARQ error control schemes for near Earth satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    Two hybrid Automatic Repeat Request (ARQ) error control schemes are proposed for NASA near Earth satellite communications. Both schemes are adaptive in nature, and employ cascaded codes to achieve both high reliability and throughput efficiency for high data rate file transfer.
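    The retransmission half of a hybrid ARQ scheme can be sketched with stop-and-wait ARQ over a lossy channel: append a CRC-32, retransmit until the receiver's checksum verifies. The paper's schemes layer cascaded error-correcting codes on top of this; here plain CRC error detection stands in for that layer, and the channel model is invented.

```python
import random
import zlib

def send_with_arq(payload, channel, max_tries=10):
    """Stop-and-wait ARQ sketch: frame = payload + CRC-32; retransmit
    until the receiver's recomputed CRC matches (an implicit ACK)."""
    frame = payload + zlib.crc32(payload).to_bytes(4, "big")
    for attempt in range(1, max_tries + 1):
        received = channel(frame)
        data, crc = received[:-4], received[-4:]
        if zlib.crc32(data).to_bytes(4, "big") == crc:
            return data, attempt          # clean frame received
    raise RuntimeError("retry limit exceeded")

rng = random.Random(7)
def noisy_channel(frame, p_corrupt=0.3):
    """Corrupt one byte of the frame with probability p_corrupt."""
    frame = bytearray(frame)
    if rng.random() < p_corrupt:
        frame[rng.randrange(len(frame))] ^= 0xFF   # flip one byte
    return bytes(frame)

data, attempts = send_with_arq(b"telemetry block", noisy_channel)
```

Pure ARQ trades throughput for reliability as the channel degrades; the hybrid schemes above add forward error correction so many corrupted frames can be repaired without a retransmission.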

  20. A Neuro-genetic Control Scheme Application for Industrial R³ Workspaces

    NASA Astrophysics Data System (ADS)

    Irigoyen, E.; Larrea, M.; Valera, J.; Gómez, V.; Artaza, F.

    This work presents a neuro-genetic control scheme for an R³ workspace application. The solution is based on a Multi Objective Genetic Algorithm reference generator and an Adaptive Predictive Neural Network Controller. Crane position control is presented as an application of the proposed control scheme.

  1. Hampshire Probation Sports Counselling Scheme.

    ERIC Educational Resources Information Center

    Waldman, Keith

    A sports counseling scheme for young people on criminal probation in Hampshire (England) was developed in the 1980s as a partnership between the Sports Council and the Probation Service. The scheme aims to encourage offenders, aged 14 and up, to make constructive use of their leisure time; to allow participants the opportunity to have positive…

  2. Test Information Targeting Strategies for Adaptive Multistage Testing Designs.

    ERIC Educational Resources Information Center

    Luecht, Richard M.; Burgin, William

    Adaptive multistage testlet (MST) designs appear to be gaining popularity for many large-scale computer-based testing programs. These adaptive MST designs use a modularized configuration of preconstructed testlets and embedded score-routing schemes to prepackage different forms of an adaptive test. The conditional information targeting (CIT)…

  3. A Parallel Implicit Adaptive Mesh Refinement Algorithm for Predicting Unsteady Fully-Compressible Reactive Flows

    NASA Astrophysics Data System (ADS)

    Northrup, Scott A.

    A new parallel implicit adaptive mesh refinement (AMR) algorithm is developed for the prediction of unsteady behaviour of laminar flames. The scheme is applied to the solution of the system of partial-differential equations governing time-dependent, two- and three-dimensional, compressible laminar flows for reactive thermally perfect gaseous mixtures. A high-resolution finite-volume spatial discretization procedure is used to solve the conservation form of these equations on body-fitted multi-block hexahedral meshes. A local preconditioning technique is used to remove numerical stiffness and maintain solution accuracy for low-Mach-number, nearly incompressible flows. A flexible block-based octree data structure has been developed and is used to facilitate automatic solution-directed mesh adaptation according to physics-based refinement criteria. The data structure also enables an efficient and scalable parallel implementation via domain decomposition. The parallel implicit formulation makes use of a dual-time-stepping like approach with an implicit second-order backward discretization of the physical time, in which a Jacobian-free inexact Newton method with a preconditioned generalized minimal residual (GMRES) algorithm is used to solve the system of nonlinear algebraic equations arising from the temporal and spatial discretization procedures. An additive Schwarz global preconditioner is used in conjunction with block incomplete LU type local preconditioners for each sub-domain. The Schwarz preconditioning and block-based data structure readily allow efficient and scalable parallel implementations of the implicit AMR approach on distributed-memory multi-processor architectures. The scheme was applied to solutions of steady and unsteady laminar diffusion and premixed methane-air combustion and was found to accurately predict key flame characteristics. For a premixed flame under terrestrial gravity, the scheme accurately predicted the frequency of the natural

  4. Importance biasing scheme implemented in the PRIZMA code

    SciTech Connect

    Kandiev, I.Z.; Malyshkin, G.N.

    1997-12-31

    The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has wide capabilities for describing geometry, sources, and material composition, and for obtaining parameters specified by the user. It can follow particle cascades (including neutrons, photons, electrons, positrons and heavy charged particles), taking into account possible transmutations. An importance biasing scheme was implemented to solve problems that require calculating functionals related to small probabilities (for example, radiation shielding and detection problems). The scheme enables the trajectory-building algorithm to be adapted to the peculiarities of the problem.

  5. Application of a monotonic upstream-biased transport scheme to three-dimensional constituent transport calculations

    NASA Technical Reports Server (NTRS)

    Allen, Dale J.; Douglass, Anne R.; Rood, Richard B.; Guthrie, Paul D.

    1991-01-01

    The application of van Leer's scheme, a monotonic, upstream-biased differencing scheme, to three-dimensional constituent transport calculations is shown. The major disadvantage of the scheme is shown to be a self-limiting diffusion. A major advantage of the scheme is shown to be its ability to maintain constituent correlations. The scheme is adapted for a spherical coordinate system with a hybrid sigma-pressure coordinate in the vertical. Special consideration is given to cross-polar flow. The vertical wind calculation is shown to be extremely sensitive to the method of calculating the divergence. This sensitivity implies that a vertical wind formulation consistent with the transport scheme is essential for accurate transport calculations. The computational savings of the time-splitting method used to solve this equation are shown. Finally, the capabilities of this scheme are illustrated by an ozone transport and chemistry model simulation.
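    The constituent-correlation property highlighted above follows from the scheme being homogeneous in the transported field: scaling a tracer scales its limited fluxes identically. The 1-D sketch below uses van Leer's smooth limiter φ(r) = (r + |r|)/(1 + |r|) and transports two perfectly correlated tracers; the paper applies the scheme in 3-D spherical geometry.

```python
import numpy as np

def van_leer_step(q, c):
    """Flux-limited upwind advection (u > 0, periodic grid,
    Courant number c) with van Leer's limiter."""
    dq_up = q - np.roll(q, 1)        # upwind slope
    dq_dn = np.roll(q, -1) - q       # downwind slope
    r = np.where(dq_dn != 0, dq_up / np.where(dq_dn == 0, 1, dq_dn), 0.0)
    phi = (r + np.abs(r)) / (1.0 + np.abs(r))    # van Leer limiter
    q_face = q + 0.5 * (1.0 - c) * phi * dq_dn   # limited face value
    flux = c * q_face
    return q - (flux - np.roll(flux, 1))

n = 100
q1 = np.exp(-0.5 * ((np.arange(n) - 30.0) / 6.0) ** 2)   # tracer 1
q2 = 2.0 * q1                        # perfectly correlated tracer 2
for _ in range(100):
    q1, q2 = van_leer_step(q1, c=0.4), van_leer_step(q2, c=0.4)
```

After 100 steps the ratio q2/q1 = 2 is preserved exactly, and the monotone limiter keeps the profiles free of new extrema, at the price of the self-limiting diffusion the abstract mentions.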

  6. Adaptive Force Control For Compliant Motion Of A Robot

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1995-01-01

    Two adaptive control schemes offer robust solutions to the problem of stably controlling the contact forces between a robotic manipulator and objects in its environment. They are called "adaptive admittance control" and "adaptive compliance control." Both schemes involve the use of force and torque sensors that indicate contact forces. The schemes performed well when tested in computational simulations in which they were used to control a seven-degree-of-freedom robot arm executing contact tasks. The choice between admittance and compliance control is dictated by the requirements of the application at hand.

  7. A decoupled energy stable scheme for a hydrodynamic phase-field model of mixtures of nematic liquid crystals and viscous fluids

    NASA Astrophysics Data System (ADS)

    Zhao, Jia; Yang, Xiaofeng; Shen, Jie; Wang, Qi

    2016-01-01

    We develop a linear, first-order, decoupled, energy-stable scheme for a binary hydrodynamic phase field model of mixtures of nematic liquid crystals and viscous fluids that satisfies an energy dissipation law. We show that the semi-discrete scheme in time satisfies an analogous, semi-discrete energy-dissipation law for any time-step and is therefore unconditionally stable. We then discretize the spatial operators in the scheme by a finite-difference method and implement the fully discrete scheme in a simplified version using CUDA on GPUs in 3 dimensions in space and time. Two numerical examples for rupture of nematic liquid crystal filaments immersed in a viscous fluid matrix are given, illustrating the effectiveness of this new scheme in resolving complex interfacial phenomena in free surface flows of nematic liquid crystals.

  8. A note on the leap-frog scheme in two and three dimensions. [finite difference method for partial differential equations

    NASA Technical Reports Server (NTRS)

    Abarbanel, S.; Gottlieb, D.

    1976-01-01

    The paper considers the leap-frog finite-difference method (Kreiss and Oliger, 1973) for systems of partial differential equations of the form du/dt = dF/dx + dG/dy + dH/dz, where d denotes partial derivative, u is a q-component vector and a function of x, y, z, and t, and the vectors F, G, and H are functions of u only. The original leap-frog algorithm is shown to admit a modification that improves on the stability conditions for two and three dimensions by factors of 2 and 2.8, respectively, thereby permitting larger time steps. The scheme for three dimensions is considered optimal in the sense that it combines simple averaging and large time steps.
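    The stability bound at stake is easy to demonstrate in one dimension, where leap-frog advection is neutrally stable for |c| ≤ 1 (c the Courant number) and blows up beyond it; the paper's point is that the straightforward multi-dimensional extension shrinks this bound, which the modified scheme largely recovers. In this sketch a tiny noise perturbation seeds the unstable modes.

```python
import numpy as np

def leapfrog_max(c, steps, n=64):
    """Leap-frog advection of q_t + u q_x = 0 on a periodic grid:
    q[i]^{m+1} = q[i]^{m-1} - c*(q[i+1]^m - q[i-1]^m), c = u*dt/dx.
    Returns the final max amplitude."""
    rng = np.random.default_rng(1)
    x = np.arange(n)
    # two time levels from the exact solution, plus noise to seed all modes
    q_old = np.sin(2 * np.pi * x / n) + 1e-10 * rng.standard_normal(n)
    q = np.sin(2 * np.pi * (x - c) / n) + 1e-10 * rng.standard_normal(n)
    for _ in range(steps):
        q_old, q = q, q_old - c * (np.roll(q, -1) - np.roll(q, 1))
    return float(np.abs(q).max())

stable = leapfrog_max(c=0.5, steps=400)     # amplitude stays bounded
unstable = leapfrog_max(c=1.2, steps=400)   # explodes past the CFL bound
```

Within the bound the scheme is non-dissipative (amplitude neither grows nor decays), which is why larger permissible time steps translate directly into efficiency.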

  9. An Elasticity-Based Mesh Scheme Applied to the Computation of Unsteady Three-Dimensional Spoiler and Aeroelastic Problems

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    1999-01-01

    This paper presents a modification of the spring analogy scheme which uses axial linear spring stiffness with selective spring stiffening/relaxation. An alternate approach to solving the geometric conservation law is taken which eliminates the need for storage of metric Jacobians at previous time steps. Efficiency and verification are illustrated with several unsteady 2-D airfoil Euler computations. The method is next applied to the computation of the turbulent flow about a 2-D airfoil and wing with two- and three-dimensional moving spoiler surfaces, and the results are compared with Benchmark Active Controls Technology (BACT) experimental data. The aeroelastic response at low dynamic pressure of an airfoil to a single large scale oscillation of a spoiler surface is computed. This study confirms that it is possible to achieve accurate solutions with a very large time step for aeroelastic problems using the fluid solver and aeroelastic integrator as discussed in this paper.

  10. A New Approach for Imposing Artificial Viscosity for Explicit Discontinuous Galerkin Scheme

    NASA Astrophysics Data System (ADS)

    See, Yee Chee; Lv, Yu; Ihme, Matthias

    2014-11-01

    The development of high-order numerical methods for unstructured meshes has been a significant area of research, and the discontinuous Galerkin (DG) method has found considerable interest. However, the DG method exhibits robustness issues in application to flows with discontinuities and shocks. To address this issue, an artificial viscosity method was proposed by Persson et al. for steady flows. Its extension to time-dependent flows introduces substantial time-step restrictions. To overcome this, a novel method based on an entropy formulation is proposed. The resulting scheme imposes no additional restriction on the CFL constraint. Following a description of the formulation and an evaluation of its stability, this newly developed artificial viscosity scheme is demonstrated in application to different test cases.

  11. Novel driving scheme for FLCD

    NASA Astrophysics Data System (ADS)

    Wu, Jiin-chuan; Wang, Chi-Chang

    1996-03-01

    A frame change data driving scheme (FCDDS) is developed for matrix-addressed ferroelectric LCDs (FLCDs); it uses only positive voltages for the row and column waveforms to achieve bipolar driving waveforms on the FLCD pixels. Thus the required supply voltage for the driver chips is half that of the conventional driving scheme. Each scan line is addressed in only twice the switching time (tau) (the minimum response time of the FLC), so this scheme is suitable for high-duty-ratio panels. To satisfy the bistable electro-optic effect of the FLCD and maintain zero net dc voltage across each pixel of the liquid crystal, pixels are turned on and off in different time slots and frame slots. This driving scheme can be easily implemented using commercially available STN LCD drivers plus a small external circuit, or by making an ASIC that is a slight modification of the STN driver. Both methods are discussed.

  12. Adaptive clinical trial designs in oncology

    PubMed Central

    Zang, Yong; Lee, J. Jack

    2015-01-01

    Adaptive designs have become popular in clinical trials and drug development. Unlike traditional trial designs, adaptive designs use accumulating data to modify the ongoing trial without undermining its integrity and validity. As a result, adaptive designs provide a flexible and effective way to conduct clinical trials. The designs have the potential advantages of improving the study power, reducing sample size and total cost, treating more patients with more effective treatments, identifying efficacious drugs for specific subgroups of patients based on their biomarker profiles, and shortening the time for drug development. In this article, we review adaptive designs commonly used in clinical trials and investigate several aspects of the designs, including the dose-finding scheme, interim analysis, adaptive randomization, biomarker-guided randomization, and seamless designs. For illustration, we provide examples of real trials conducted with adaptive designs. We also discuss practical issues from the perspective of using adaptive designs in oncology trials. PMID:25811018
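    The allocation-adjusting idea behind adaptive randomization can be illustrated with a small sketch. This is a generic response-adaptive rule, not any specific design from the review; the burn-in length and the add-one smoothing are illustrative assumptions:

```python
def adaptive_allocation_prob(successes_a, n_a, successes_b, n_b, burn_in=20):
    """Response-adaptive randomization sketch: after an equal-allocation
    burn-in, assign new patients to arm A with probability proportional
    to its (smoothed) observed success rate."""
    if n_a + n_b < burn_in:
        return 0.5  # equal randomization while data are sparse
    # Add-one (Laplace) smoothing avoids degenerate rates early on.
    rate_a = (successes_a + 1) / (n_a + 2)
    rate_b = (successes_b + 1) / (n_b + 2)
    return rate_a / (rate_a + rate_b)
```

    As the trial accumulates data, allocation drifts toward the better-performing arm, which is the "treating more patients with more effective treatments" property mentioned above.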

  13. On the marginal stability of upwind schemes

    NASA Astrophysics Data System (ADS)

    Gressier, J.; Moschetta, J.-M.

    Following Quirk's analysis of Roe's scheme, general criteria are derived to predict odd-even decoupling. This analysis is applied to Roe's scheme, Pullin's EFM scheme, Macrossan's EIM scheme and Liou's AUSM scheme. Strict stability is shown to be desirable to avoid most of these flaws. Finally, the link between marginal stability and accuracy on shear waves is established.

  14. Relaxation schemes for Chebyshev spectral multigrid methods

    NASA Technical Reports Server (NTRS)

    Kang, Yimin; Fulton, Scott R.

    1993-01-01

    Two relaxation schemes for Chebyshev spectral multigrid methods are presented for elliptic equations with Dirichlet boundary conditions. The first scheme is a pointwise-preconditioned Richardson relaxation scheme and the second is a line relaxation scheme. The line relaxation scheme provides an efficient and relatively simple approach for solving two-dimensional spectral equations. Numerical examples and comparisons with other methods are given.

  15. The fundamentals of adaptive grid movement

    NASA Technical Reports Server (NTRS)

    Eiseman, Peter R.

    1990-01-01

    Basic grid point movement schemes, referred to as adaptive grids, are studied. Weight functions and equidistribution in one dimension are treated. The specification of coefficients in the linear weight, attraction to a given grid or a curve, and evolutionary forces are considered. Curve by curve and finite volume methods are described. The temporal coupling of partial differential equation solvers and grid generators is discussed.

  16. Central Upwind Scheme for a Compressible Two-Phase Flow Model

    PubMed Central

    Ahmed, Munshoor; Saleem, M. Rehan; Zia, Saqib; Qamar, Shamsul

    2015-01-01

    In this article, a compressible two-phase reduced five-equation flow model is numerically investigated. The model is non-conservative, and the governing equations consist of two equations describing the conservation of mass, one for the overall momentum and one for the total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side which represents the energy exchange between the two fluids in the form of mechanical and thermodynamical work. For the numerical approximation of the model, a high resolution central upwind scheme is implemented. This is a non-oscillatory upwind-biased finite volume scheme which does not require a Riemann solver at each time step. A few numerical case studies of two-phase flows are presented. For validation and comparison, the same model is also solved by using the kinetic flux-vector splitting (KFVS) and staggered central schemes. It was found that the central upwind scheme produces results comparable to those of the KFVS scheme. PMID:26039242
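    The Riemann-solver-free character of central-upwind schemes can be sketched for a scalar conservation law. This is a minimal 1-D illustration of a Kurganov-type central-upwind flux, not the paper's full two-phase implementation; only the one-sided local speeds are needed, no Riemann problem is solved:

```python
def central_upwind_flux(uL, uR, f, df):
    """Central-upwind numerical flux for a scalar conservation law
    u_t + f(u)_x = 0, built from one-sided local wave speeds a+ and a-
    instead of an exact or approximate Riemann solver."""
    a_plus = max(df(uL), df(uR), 0.0)
    a_minus = min(df(uL), df(uR), 0.0)
    if a_plus == a_minus:  # both zero: no waves at the interface
        return f(uL)
    return ((a_plus * f(uL) - a_minus * f(uR)) / (a_plus - a_minus)
            + (a_plus * a_minus) / (a_plus - a_minus) * (uR - uL))

# Burgers' equation as a concrete example: f(u) = u^2/2, f'(u) = u
f = lambda u: 0.5 * u * u
df = lambda u: u
```

    When the left and right states coincide, the flux reduces to the exact flux f(u) (consistency); for a Riemann state the second term supplies the numerical dissipation.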

  17. Assessment of various convective parametrisation schemes for warm season precipitation forecasts

    NASA Astrophysics Data System (ADS)

    Mazarakis, Nikos; Kotroni, Vassiliki; Lagouvardos, Konstantinos; Argyriou, Athanassios

    2010-05-01

    In the frame of the EU/FP6-funded FLASH project, the sensitivity of numerical model quantitative precipitation forecasts to the choice of the convective parameterization scheme (CPS) has been examined for twenty selected cases characterized by intense convective activity and widespread precipitation over Greece during the warm period of 2005-2007. The schemes are: Kain-Fritsch, Grell and Betts-Miller-Janjic. The simulated precipitation from the 8-km grid was verified against raingauge measurements and lightning data provided by the ZEUS long-range lightning detection system. The validation against both sources of data showed that, among the three CPSs, the most consistent behavior in quantitative precipitation forecasting was obtained by the Kain-Fritsch scheme, which provided the best statistical scores. Further, various modifications of the Kain-Fritsch (KF) scheme have been examined. The modifications include: (a) maximization of the convective precipitation efficiency, (b) a change of the convective time step, (c) forcing the convective scheme to produce more or less cloud material, and (d) alteration of the vertical profile of updraft mass flux detrainment.

  18. A semi-implicit gas-kinetic scheme for smooth flows

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Guo, Zhaoli

    2016-08-01

    In this paper, a semi-implicit gas-kinetic scheme (SIGKS) is derived for smooth flows based on the Bhatnagar-Gross-Krook (BGK) equation. As a finite-volume scheme, the evolution of the average flow variables in a control volume is under the Eulerian framework, whereas the construction of the numerical flux across the cell interface comes from the Lagrangian perspective. The adoption of the Lagrangian aspect makes the collision and the transport mechanisms intrinsically coupled together in the flux evaluation. As a result, the time step size is independent of the particle collision time and solely determined by the Courant-Friedrichs-Lewy (CFL) condition. An analysis of the reconstructed distribution function at the cell interface shows that the SIGKS can be viewed as a modified Lax-Wendroff type scheme with an additional term. Furthermore, the additional term, which comes from the implicitness in the reconstruction, is expected to enhance the numerical stability of the scheme. A number of numerical tests of smooth flows with low and moderate Mach numbers are performed to benchmark the SIGKS. The results show that the method has second-order spatial accuracy, and can give accurate numerical solutions in comparison with benchmark results. It is also demonstrated that the numerical stability of the proposed scheme is better than the original GKS for smooth flows.

  19. Adaptive Square-Root Cubature-Quadrature Kalman Particle Filter for satellite attitude determination using vector observations

    NASA Astrophysics Data System (ADS)

    Kiani, Maryam; Pourtakdoust, Seid H.

    2014-12-01

    A novel algorithm is presented in this study for the estimation of a spacecraft's attitude and angular rates from vector observations. In this regard, a new cubature-quadrature particle filter (CQPF) is initially developed that uses the Square-Root Cubature-Quadrature Kalman Filter (SR-CQKF) to generate the importance proposal distribution. The developed CQPF scheme avoids the basic limitation of the particle filter (PF) with regard to accounting for the new measurements. Subsequently, CQPF is enhanced to adjust the sample size at every time step utilizing the idea of confidence intervals, thus improving the efficiency and accuracy of the newly proposed adaptive CQPF (ACQPF). In addition, application of the q-method for filter initialization has intensified the computational burden. The current study also applies ACQPF to the problem of attitude estimation of a low Earth orbit (LEO) satellite. For this purpose, the undertaken satellite is equipped with a three-axis magnetometer (TAM) as well as a sun sensor pack that provide noisy geomagnetic field data and Sun direction measurements, respectively. The results and performance of the proposed filter are investigated and compared with those of the extended Kalman filter (EKF) and the standard particle filter (PF) utilizing a Monte Carlo simulation. The comparison demonstrates the viability and the accuracy of the proposed nonlinear estimator.

  20. Impact of spatial and temporal aggregation of input parameters on the assessment of irrigation scheme performance

    NASA Astrophysics Data System (ADS)

    Lorite, I. J.; Mateos, L.; Fereres, E.

    2005-01-01

    The simulations of dynamic, spatially distributed non-linear models are impacted by the degree of spatial and temporal aggregation of their input parameters and variables. This paper deals with the impact of these aggregations on the assessment of irrigation scheme performance by simulating water use and crop yield. The analysis was carried out on a 7000 ha irrigation scheme located in Southern Spain. Four irrigation seasons differing in rainfall patterns were simulated (from 1996/1997 to 1999/2000), with the actual soil parameters and with hypothetical soil parameters representing wider ranges of soil variability. Three spatial aggregation levels were considered: (I) individual parcels (about 800), (II) command areas (83) and (III) the whole irrigation scheme. Equally, five temporal aggregation levels were defined: daily, weekly, monthly, quarterly and annually. The results showed little impact of spatial aggregation on the predictions of irrigation requirements and of crop yield for the scheme. The impact of aggregation was greater in rainy years, for deep-rooted crops (sunflower) and in scenarios with heterogeneous soils. The highest impact on irrigation requirement estimations was in the scenario with the most heterogeneous soil and in 1999/2000, a year with frequent rainfall during the irrigation season: a difference of 7% between aggregation levels I and III was found. Equally, it was found that temporal aggregation had a significant impact on irrigation requirement predictions only for time steps longer than 4 months. In general, simulated annual irrigation requirements decreased as the time step increased. The impact was greater in rainy years (especially with abundant and concentrated rain events) and in crops whose cycles partly coincide with the rainy season (garlic, winter cereals and olive). It is concluded that in this case, average, representative values for the main inputs of the model (crop, soil properties and sowing dates) can generate results

  1. An adaptive alpha for the implicit Monte Carlo equations

    SciTech Connect

    Wollaber, Allan B

    2010-12-07

    During the derivation of Fleck and Cummings' Implicit Monte Carlo (IMC) equations, a global user parameter α is introduced that may be adjusted in the range 0.5 ≤ α ≤ 1.0 in order to control the degree of 'implicitness' of the IMC approximation of the thermal radiative transfer equations. For linear (and certain nonlinear) problems, it can be shown that the IMC equations are second-order accurate in the time step size Δt if α = 0.5, and first-order accurate otherwise. However, users almost universally choose α = 1 in an attempt to avoid unphysical temperature oscillations that can occur in problem regions where the optical time step is large. In this paper, we provide a mathematically motivated, adaptive value of α that changes dynamically according to the space- and time-dependent problem data. We show that our α → 0.5 in the limit of small Δt, which automatically produces second-order accuracy. In the limit of large time steps, α → 1; this retains the 'fully implicit' time behavior that is usually employed throughout the entire problem. An adaptive α also has the advantages of being trivial to implement in current IMC implementations and of allowing the elimination of a user input parameter that is a potential source of confusion. Test problems are presented to demonstrate the accuracy of the new approach.
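    The limiting behaviour described above can be mimicked with a simple functional form. This is purely hypothetical: the paper derives α from the space- and time-dependent problem data, whereas the relaxation scale τ below is an illustrative assumption that only reproduces the two limits:

```python
import math

def adaptive_alpha(dt, tau):
    """Hypothetical adaptive implicitness parameter with the limits
    described in the abstract: alpha -> 0.5 as dt -> 0 (second-order
    accuracy) and alpha -> 1 as dt -> infinity (fully implicit)."""
    return 1.0 - 0.5 * math.exp(-dt / tau)
```

    Any monotone function interpolating between 0.5 and 1 shares these limits; the substance of the paper is choosing τ-like quantities from the local problem data.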

  2. Fully Threaded Tree for Adaptive Refinement Fluid Dynamics Simulations

    NASA Technical Reports Server (NTRS)

    Khokhlov, A. M.

    1997-01-01

    A fully threaded tree (FTT) for adaptive refinement of regular meshes is described. By using a tree threaded at all levels, tree traversals for finding nearest neighbors are avoided. All operations on a tree, including tree modifications, are O(N), where N is the number of cells, and are performed in parallel. An efficient implementation of the tree is described that requires 2N words of memory. A filtering algorithm for removing high frequency noise during mesh refinement is described. An FTT can be used in various numerical applications. In this paper, it is applied to the integration of the Euler equations of fluid dynamics. An adaptive mesh time stepping algorithm is described in which different time steps are used at different levels of the tree. Time stepping and mesh refinement are interleaved to avoid the extensive buffer layers of fine mesh that were otherwise required ahead of moving shocks. Test examples are presented, and the FTT performance is evaluated. A three dimensional simulation of the interaction of a shock wave and a spherical bubble is carried out that shows the development of azimuthal perturbations on the bubble surface.
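    The level-dependent time stepping can be sketched as a simple halving rule: cells at a finer tree level advance with half the step of the next coarser level, so they take twice as many substeps per coarse step. This is a generic sketch of the idea; the paper's interleaving of time stepping with refinement involves more bookkeeping:

```python
def level_time_step(dt_root, level):
    """Time step for cells at a given refinement level: halved once
    per level below the root, matching the halved cell size."""
    return dt_root / (2 ** level)

def substeps_per_root_step(level):
    """Number of fine-level substeps taken during one root-level step."""
    return 2 ** level
```

    The product of the two is always one root step, which is what keeps all levels synchronized at root-step boundaries.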

  3. Convergence acceleration of implicit schemes in the presence of high aspect ratio grid cells

    NASA Technical Reports Server (NTRS)

    Buelow, B. E. O.; Venkateswaran, S.; Merkle, C. L.

    1993-01-01

    The performance of Navier-Stokes codes is influenced by several phenomena. For example, the robustness of the code may be compromised by the lack of grid resolution, by a need for more precise initial conditions, or because all or part of the flowfield lies outside the flow regime in which the algorithm converges efficiently. A primary example of the latter effect is the presence of extended low Mach number and/or low Reynolds number regions, which cause convergence deterioration of time marching algorithms. Recent research into this problem by several workers, including the present authors, has largely negated this difficulty through the introduction of time-derivative preconditioning. In the present paper, we employ the preconditioned algorithm to address convergence difficulties arising from sensitivity to grid stretching and high aspect ratio grid cells. Strong grid stretching is particularly characteristic of turbulent flow calculations, where the grid must be refined very tightly in the dimension normal to the wall, without a similar refinement in the tangential direction. High aspect ratio grid cells also arise in problems that involve high aspect ratio domains such as combustor coolant channels. In both situations, the high aspect ratio cells can lead to extreme deterioration in convergence. It is the purpose of the present paper to address the reasons for this adverse response to grid stretching and to suggest methods for enhancing convergence under such circumstances. Numerical algorithms typically possess a maximum allowable or optimum value for the time step size, expressed in non-dimensional terms as a CFL number or von Neumann number (VNN). In the presence of high aspect ratio cells, the smallest dimension of the grid cell controls the time step size, causing it to be extremely small, which in turn results in the deterioration of convergence behavior. For explicit schemes, this time step limitation cannot be exceeded without violating stability restrictions
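    The aspect-ratio effect on the explicit time step can be made concrete with a minimal sketch of the CFL restriction for a single 2-D cell, where `wave_speed` stands in for the relevant signal speed (e.g. |u| + c):

```python
def explicit_time_step(cfl, dx, dy, wave_speed):
    """Stable explicit time step for a 2-D cell: the smallest cell
    dimension controls dt, so a high aspect ratio cell (dx >> dy)
    forces dt down regardless of how large dx is."""
    return cfl * min(dx, dy) / wave_speed
```

    For a cell with dx/dy = 1000, the allowable step is a thousand times smaller than for a unit-aspect-ratio cell of the same dx, which is the convergence bottleneck the paper addresses.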

  5. High resolution schemes for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Harten, A.

    1983-01-01

    A class of new explicit second order accurate finite difference schemes for the computation of weak solutions of hyperbolic conservation laws is presented. These highly nonlinear schemes are obtained by applying a nonoscillatory first order accurate scheme to an appropriately modified flux function. The so-derived second order accurate schemes achieve high resolution while preserving the robustness of the original nonoscillatory first order accurate scheme. Numerical experiments are presented to demonstrate the performance of these new schemes.
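    Non-oscillatory second-order schemes of this kind rely on limited slopes; the minmod limiter is the classic building block. This is a generic sketch of the limiter only, not the exact modified-flux construction of the paper:

```python
def minmod(a, b):
    """Minmod slope limiter: returns the smaller-magnitude slope when
    the two candidate slopes agree in sign, and zero at extrema (where
    the signs differ), which is what suppresses spurious oscillations."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b
```

    Applying such a limiter to the slopes (or, in the modified-flux view, to the flux corrections) is what lets a scheme be second-order in smooth regions while degenerating to the robust first-order scheme near discontinuities.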

  6. The implementation of reverse Kessler warm rain scheme for radar reflectivity assimilation using a nudging approach in New Zealand

    NASA Astrophysics Data System (ADS)

    Zhang, Sijin; Austin, Geoff; Sutherland-Stacey, Luke

    2014-05-01

    Reverse Kessler warm rain processes were implemented within the Weather Research and Forecasting (WRF) model and coupled with a Newtonian relaxation, or nudging, technique designed to improve quantitative precipitation forecasting (QPF) in New Zealand by making use of observed radar reflectivity and modest computing facilities. One of the reasons for developing such a scheme, rather than using 4D-Var for example, is that variational radar assimilation schemes in general, and 4D-Var in particular, require computational resources beyond the capability of most university groups and indeed some national forecasting centres of small countries like New Zealand. The new scheme adjusts the model water vapor mixing ratio profiles based on observed reflectivity at each time step within an assimilation time window. The scheme can be divided into the following steps: (i) the radar reflectivity is first converted to rain water; (ii) the rain water is then used to derive the cloud water content according to the reverse Kessler scheme; (iii) the water vapor mixing ratio associated with the cloud water content is then calculated based on the saturation adjustment processes; (iv) finally, the adjusted water vapor is nudged into the model and the model background is updated. Thirteen rainfall cases which occurred in the summer of 2011/2012 in New Zealand were used to evaluate the new scheme; different forecast scores were calculated and showed that the new scheme was able to improve precipitation forecasts on average up to around 7 hours ahead, depending on the verification threshold.
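    Steps (i) and (iv) of the scheme can be sketched in a few lines. The power-law constants `a` and `b` and the relaxation coefficient below are illustrative placeholders, not the operational values, which depend on the assumed drop-size distribution and the nudging window:

```python
def reflectivity_to_rainwater(dbz, a=2.0e4, b=1.8):
    """Step (i) sketch: invert a generic power-law Z-q_r relation to
    estimate a rain water amount from radar reflectivity (dBZ).
    Constants a, b are illustrative placeholders."""
    z_linear = 10.0 ** (dbz / 10.0)      # reflectivity factor, mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)

def nudge(model_value, radar_derived_value, relaxation=0.1):
    """Step (iv) sketch: Newtonian relaxation of a model variable toward
    the radar-derived target at each time step."""
    return model_value + relaxation * (radar_derived_value - model_value)
```

    Repeated over every time step in the assimilation window, the small relaxation increments draw the model moisture field toward the radar-implied state without the cost of a variational minimization.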

  7. A third-order compact gas-kinetic scheme on unstructured meshes for compressible Navier-Stokes solutions

    NASA Astrophysics Data System (ADS)

    Pan, Liang; Xu, Kun

    2016-08-01

    In this paper, a third-order compact gas-kinetic scheme is proposed for the first time on unstructured meshes for compressible viscous flow computations. The possibility of designing such a third-order compact scheme is due to the high-order gas evolution model, where a time-dependent gas distribution function at the cell interface not only provides the fluxes across the interface, but also presents a time accurate solution for the flow variables there. As a result, both cell averaged and cell interface flow variables can be used for the initial data reconstruction at the beginning of the next time step. A weighted least-square procedure has been used for the initial reconstruction. Therefore, a compact third-order gas-kinetic scheme involving only neighboring cells can be developed on unstructured meshes. In comparison with other conventional high-order schemes, the current method avoids Gaussian point integration for numerical fluxes along a cell interface and the multi-stage Runge-Kutta method for temporal accuracy. The third-order compact scheme is numerically stable under the CFL condition CFL ≈ 0.5. Due to its multidimensional gas-kinetic formulation and the coupling of inviscid and viscous terms, even with unstructured meshes, the boundary layer solution and vortex structure can be accurately captured by the current scheme. At the same time, the compact scheme can capture strong shocks as well.

  8. Fourth-order compact schemes for the numerical simulation of coupled Burgers' equation

    NASA Astrophysics Data System (ADS)

    Bhatt, H. P.; Khaliq, A. Q. M.

    2016-03-01

    This paper introduces two new modified fourth-order exponential time differencing Runge-Kutta (ETDRK) schemes in combination with a global fourth-order compact finite difference scheme (in space) for direct integration of the nonlinear coupled viscous Burgers' equations in their original form, without using any transformation or linearization techniques. One scheme is a modification of the Cox and Matthews ETDRK4 scheme based on the (1,3)-Padé approximation, and the other is a modification of Krogstad's ETDRK4-B scheme based on the (2,2)-Padé approximation. Efficient versions of the proposed schemes are obtained by using a partial fraction splitting technique for rational functions. The stability properties of the proposed schemes are studied by plotting the stability regions, which provide an explanation of their behavior for dispersive and dissipative problems. The order of convergence of the schemes is examined empirically; the modification of ETDRK4 is found to converge at the expected rate even if the initial data are nonsmooth, whereas the modification of ETDRK4-B suffers from order reduction in that case. Several numerical experiments are carried out in order to demonstrate the performance and adaptability of the proposed schemes. The numerical results indicate that the proposed schemes provide better accuracy than other schemes available in the literature. Moreover, the results show that the modification of ETDRK4 is reliable and yields more accurate results than the modification of ETDRK4-B when solving problems with nonsmooth data or with high Reynolds numbers.

  9. Technical note: Improving the AWAT filter with interpolation schemes for advanced processing of high resolution data

    NASA Astrophysics Data System (ADS)

    Peters, Andre; Nehls, Thomas; Wessolek, Gerd

    2016-06-01

    Weighing lysimeters with appropriate data filtering yield the most precise and unbiased information for precipitation (P) and evapotranspiration (ET). A recently introduced filter scheme for such data is the AWAT (Adaptive Window and Adaptive Threshold) filter (Peters et al., 2014). The filter applies an adaptive threshold to separate significant from insignificant mass changes, guaranteeing that P and ET are not overestimated, and uses a step interpolation between the significant mass changes. In this contribution we show that the step interpolation scheme, which reflects the resolution of the measuring system, can lead to unrealistic predictions of P and ET, especially if they are required in high temporal resolution. We introduce linear and spline interpolation schemes to overcome these problems. To guarantee that medium to strong precipitation events abruptly following low or zero fluxes are not smoothed in an unfavourable way, a simple heuristic selection criterion is used, which attributes such events to the step interpolation. The three interpolation schemes (step, linear and spline) are tested and compared using a data set from a grass-reference lysimeter with 1 min resolution, ranging from 1 January to 5 August 2014. The selected output resolutions for P and ET prediction are 1 day, 1 h and 10 min. As expected, the step scheme yielded reasonable flux rates only for a resolution of 1 day, whereas the other two schemes are well able to yield reasonable results for any resolution. The spline scheme returned slightly better results than the linear scheme concerning the differences between filtered values and raw data. Moreover, this scheme allows continuous differentiability of filtered data, so that any output resolution for the fluxes is sound. Since the computational burden is not problematic for any of the interpolation schemes, we suggest always using the spline scheme.
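    The difference between step and linear interpolation of the significant mass changes can be sketched directly. The knot times and masses in the test are made-up illustrative values; the spline variant would replace the linear rule with a smooth interpolant:

```python
def step_interp(t, t_knots, m_knots):
    """Step interpolation: hold the last significant mass value,
    mirroring the resolution-limited behaviour of the original filter."""
    value = m_knots[0]
    for tk, mk in zip(t_knots, m_knots):
        if tk <= t:
            value = mk
    return value

def linear_interp(t, t_knots, m_knots):
    """Linear interpolation between the same significant mass changes,
    avoiding artificial lumping of fluxes at high output resolution."""
    if t <= t_knots[0]:
        return m_knots[0]
    for i in range(1, len(t_knots)):
        if t <= t_knots[i]:
            t0, t1 = t_knots[i - 1], t_knots[i]
            m0, m1 = m_knots[i - 1], m_knots[i]
            return m0 + (m1 - m0) * (t - t0) / (t1 - t0)
    return m_knots[-1]
```

    With a step rule, the entire mass change is attributed to a single output interval; the linear rule spreads the same total change over the interval between knots, which matters once fluxes are reported at 10 min resolution.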

  10. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
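    The per-block scaling mechanism can be written down directly. This sketches only the mechanism; the substance of the paper is the perceptual optimization of the multipliers, and the 2x2 matrices in the usage below are toy values rather than real JPEG tables:

```python
def block_quant_matrix(base_q, multiplier):
    """Spatially adaptive quantization: one base quantization matrix
    per channel, scaled per 8x8 block by that block's multiplier to
    give the block's effective quantization matrix."""
    return [[q * multiplier for q in row] for row in base_q]
```

    A multiplier above 1 quantizes the block more coarsely (fewer bits, more distortion); choosing the multipliers to equalize perceptual error across blocks is the optimization described above.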

  11. Energy partitioning schemes: a dilemma.

    PubMed

    Mayer, I

    2007-01-01

    Two closely related energy partitioning schemes, in which the total energy is presented as a sum of atomic and diatomic contributions by using the "atomic decomposition of identity", are compared on the example of N,N-dimethylformamide, a simple but chemically rich molecule. Both schemes account for different intramolecular interactions, for instance they identify the weak C-H...O intramolecular interactions, but give completely different numbers. (The energy decomposition scheme based on the virial theorem is also considered.) The comparison of the two schemes resulted in a dilemma which is especially striking when these schemes are applied for molecules distorted from their equilibrium structures: one either gets numbers which are "on the chemical scale" and have quite appealing values at the equilibrium molecular geometries, but exhibiting a counter-intuitive distance dependence (the two-center energy components increase in absolute value with the increase of the interatomic distances)--or numbers with too large absolute values but "correct" distance behaviour. The problem is connected with the quick decay of the diatomic kinetic energy components. PMID:17328441

  12. An intelligent robotics control scheme

    NASA Technical Reports Server (NTRS)

    Orlando, N. E.

    1984-01-01

    The problem of robot control is viewed at the level of communicating high-level commands produced by intelligent algorithms to the actuator/sensor controllers. Four topics are considered in the design of an integrated control and communications scheme for an intelligent robotic system: the use of abstraction spaces, hierarchical versus heterarchical control, distributed processing, and the interleaving of the steps of plan creation and plan execution. A scheme is presented for an n-level distributed hierarchical/heterarchical control system that effectively interleaves intelligent planning, execution, and sensory feedback. A three-level version of this scheme has been successfully implemented in the Intelligent Systems Research Lab at NASA Langley Research Center. This implementation forms the control structure for DAISIE (Distributed Artificially Intelligent System for Interacting with the Environment), a testbed system integrating AI software with robotics hardware.

  13. Lossless Video Sequence Compression Using Adaptive Prediction

    NASA Technical Reports Server (NTRS)

    Li, Ying; Sayood, Khalid

    2007-01-01

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.

  14. Adaptive bidirectional associative memories.

    PubMed

    Kosko, B

    1987-12-01

    Bidirectionality, forward and backward information flow, is introduced in neural networks to produce two-way associative search for stored stimulus-response associations (A(i),B(i)). Two fields of neurons, F(A) and F(B), are connected by an n x p synaptic matrix M. Passing information through M gives one direction; passing information through its transpose M(T) gives the other. Every matrix is bidirectionally stable for bivalent and for continuous neurons. Paired data (A(i),B(i)) are encoded in M by summing bipolar correlation matrices. The bidirectional associative memory (BAM) behaves as a two-layer hierarchy of symmetrically connected neurons. When the neurons in F(A) and F(B) are activated, the network quickly evolves to a stable state of two-pattern reverberation, or pseudoadaptive resonance, for every connection topology M. The stable reverberation corresponds to a system energy local minimum. An adaptive BAM allows M to rapidly learn associations without supervision. Stable short-term memory reverberations across F(A) and F(B) gradually seep pattern information into the long-term memory connections M, allowing input associations (A(i),B(i)) to dig their own energy wells in the network state space. The BAM correlation encoding scheme is extended to a general Hebbian learning law. Then every BAM adaptively resonates in the sense that all nodes and edges quickly equilibrate in a system energy local minimum. A sampling adaptive BAM results when many more training samples are presented than there are neurons in F(A) and F(B), but presented for brief pulses of learning, not allowing learning to fully or nearly converge. Learning tends to improve with sample size. Sampling adaptive BAMs can learn some simple continuous mappings and can rapidly abstract bivalent associations from several noisy gray-scale samples. PMID:20523473
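    The correlation encoding and bidirectional recall described above can be sketched with bipolar vectors. This is a minimal sketch of Kosko's construction with hard-threshold neurons; the example patterns in the test are made up:

```python
def sign(x):
    """Hard-threshold (bivalent) neuron activation."""
    return 1 if x > 0 else (-1 if x < 0 else 0)

def bam_encode(pairs):
    """Encode bipolar pattern pairs (A_i, B_i) by summing their
    outer-product (bipolar correlation) matrices into the n x p
    synaptic matrix M."""
    n, p = len(pairs[0][0]), len(pairs[0][1])
    return [[sum(a[i] * b[j] for a, b in pairs) for j in range(p)]
            for i in range(n)]

def bam_recall(M, a, iters=5):
    """Bidirectional recall: pass activity forward through M and back
    through its transpose until the (A, B) pair reverberates stably."""
    n, p = len(M), len(M[0])
    b = [0] * p
    for _ in range(iters):
        b = [sign(sum(a[i] * M[i][j] for i in range(n))) for j in range(p)]
        a = [sign(sum(b[j] * M[i][j] for j in range(p))) for i in range(n)]
    return a, b
```

    Presenting a stored A pattern drives F(B) to the associated B pattern, which in turn reinforces A through the transpose: the two-pattern reverberation settling into an energy minimum.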

  15. A Data Gathering Scheme in Wireless Sensor Networks Based on Synchronization of Chaotic Spiking Oscillator Networks

    SciTech Connect

    Nakano, Hidehiro; Utani, Akihide; Miyauchi, Arata; Yamamoto, Hisao

    2011-04-19

    This paper studies a chaos-based data gathering scheme for wireless sensor networks with multiple sink nodes. In the proposed scheme, each wireless sensor node has a simple chaotic oscillator. The oscillators generate spike signals with chaotic interspike intervals and are impulsively coupled by the signals via wireless communication. Each wireless sensor node transmits and receives sensor information only at the timing of the couplings. The proposed scheme can exhibit various chaos synchronous phenomena and their breakdown phenomena, and can effectively gather sensor information with a significantly smaller number of transmissions and receptions than the conventional scheme. Also, the proposed scheme can flexibly adapt to various wireless sensor networks, not only with a single sink node but also with multiple sink nodes. This paper introduces our previous works. Through simulation experiments, we show the effectiveness of the proposed scheme and discuss its development potential.

  16. An Underfrequency Load Shedding Scheme with Minimal Knowledge of System Parameters

    NASA Astrophysics Data System (ADS)

    Joe, Athbel; Krishna, S.

    2015-02-01

    Underfrequency load shedding (UFLS) is a common practice to protect a power system during a large generation deficit. The adaptive UFLS schemes proposed in the literature have drawbacks such as the need to transmit local frequency measurements to a central location and prior knowledge of system parameters such as the inertia constant H and the load damping constant D. In this paper, a UFLS scheme that uses only local frequency measurements is proposed. The proposed method does not require prior knowledge of H and D. The scheme is developed for power systems with and without spinning reserve. The proposed scheme requires frequency measurements free from the oscillations at the swing mode frequencies; the use of an elliptic low-pass filter to remove these oscillations is proposed. The scheme is tested on a 2-generator system and the 10-generator New England system. Performance of the scheme in the presence of a power system stabilizer is also studied.

  17. Adaptive Sampling in Hierarchical Simulation

    SciTech Connect

    Knap, J; Barton, N R; Hornung, R D; Arsenlis, A; Becker, R; Jefferson, D R

    2007-07-09

    We propose an adaptive sampling methodology for hierarchical multi-scale simulation. The method utilizes a moving kriging interpolation to significantly reduce the number of evaluations of finer-scale response functions to provide essential constitutive information to a coarser-scale simulation model. The underlying interpolation scheme is unstructured and adaptive to handle the transient nature of a simulation. To handle the dynamic construction and searching of a potentially large set of finer-scale response data, we employ a dynamic metric tree database. We study the performance of our adaptive sampling methodology for a two-level multi-scale model involving a coarse-scale finite element simulation and a finer-scale crystal plasticity based constitutive law.

  18. Adaptive encoding in the visual pathway.

    PubMed

    Lesica, Nicholas A; Boloori, Alireza S; Stanley, Garrett B

    2003-02-01

    In a natural setting, the mean luminance and contrast of the light within a visual neuron's receptive field are constantly changing as the eyes saccade across complex scenes. Adaptive mechanisms modulate filtering properties of the early visual pathway in response to these variations, allowing the system to maintain differential sensitivity to nonstationary stimuli. An adaptive variant of the reverse correlation technique is used to characterize these changes during single trials. Properties of the adaptive reverse correlation algorithm were investigated via simulation. Analysis of data collected from the mammalian visual system demonstrates the ability to continuously track adaptive changes in the encoding scheme. The adaptive estimation approach provides a framework for characterizing the role of adaptation in natural scene viewing. PMID:12613554
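
    As an illustration of tracking a linear encoding kernel from stimulus-response data during a single trial, here is a minimal least-mean-squares (LMS) sketch; the paper's adaptive reverse correlation algorithm may differ in detail, and the kernel length, step size, and signals below are hypothetical:

```python
# LMS tracking of a linear stimulus-response kernel: at each time step,
# predict the response from the last k stimulus samples, then nudge the
# kernel estimate in the direction that reduces the prediction error.
import random

def lms_track(stimulus, response, k, mu=0.05):
    """Estimate a length-k linear kernel from stimulus/response by LMS."""
    w = [0.0] * k
    for t in range(k - 1, len(stimulus)):
        x = stimulus[t - k + 1:t + 1][::-1]   # most recent sample first
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = response[t] - pred
        for i in range(k):
            w[i] += mu * err * x[i]           # gradient-descent update
    return w
```

Because the estimate is updated continuously, a time-varying kernel (e.g. one modulated by luminance or contrast adaptation) can be followed within a trial, which is the point of the adaptive estimation framework above.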

  19. Studies of pressure-velocity coupling schemes for analysis of incompressible and compressible flows. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Burgreen, Gregory Wayne

    1987-01-01

    Two pressure-velocity coupling schemes, both of which solve the fully implicit discretized equations governing fluid flow, were examined, and their capability of performing large-Reynolds-number, low-Mach-number compressible flow calculations was assessed. The semi-implicit iterative SIMPLE algorithm is extended to handle transient compressible flow calculations. This extension takes into account a strong coupling between the pressure and temperature through a correction procedure based on the equation of state. Results obtained from the extended SIMPLE algorithm are then compared to similar results obtained from the non-iterative PISO algorithm. Both time-dependent and steady-state calculations were performed using an axisymmetric 2:1 pipe expansion geometry and laminar flow conditions corresponding to a Reynolds number of 1000 and a Mach number of 2.0. For calculations simulating a time-dependent compression/expansion process, both schemes exhibit transient features in excellent agreement with each other; moreover, the PISO method shows a significant computational time reduction of 60 percent over the SIMPLE scheme, regardless of the time-step size or grid size employed. The effects of numerical diffusion are shown to be significant in these calculations. For steady-state compressible flows, however, the SIMPLE algorithm displays increasing computational efficiency over the PISO method as the time-step sizes employed to reach steady-state conditions are decreased.

  20. Adapting Animals.

    ERIC Educational Resources Information Center

    Wedman, John; Wedman, Judy

    1985-01-01

    The "Animals" program found on the Apple II and IIe system master disk can be adapted for use in the mathematics classroom. Instructions for making the necessary changes and suggestions for using it in lessons related to geometric shapes are provided. (JN)

  1. Adaptive Thresholds

    SciTech Connect

    Bremer, P. -T.

    2014-08-26

    ADAPT is a topological analysis code that computes local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.

  2. Adaptive homeostasis.

    PubMed

    Davies, Kelvin J A

    2016-06-01

    Homeostasis is a central pillar of modern Physiology. The term homeostasis was invented by Walter Bradford Cannon in an attempt to extend and codify the principle of 'milieu intérieur,' or a constant interior bodily environment, that had previously been postulated by Claude Bernard. Clearly, 'milieu intérieur' and homeostasis have served us well for over a century. Nevertheless, research on signal transduction systems that regulate gene expression, or that cause biochemical alterations to existing enzymes, in response to external and internal stimuli, makes it clear that biological systems are continuously making short-term adaptations both to set-points, and to the range of 'normal' capacity. These transient adaptations typically occur in response to relatively mild changes in conditions, to programs of exercise training, or to sub-toxic, non-damaging levels of chemical agents; thus, the terms hormesis, heterostasis, and allostasis are not accurate descriptors. Therefore, an operational adjustment to our understanding of homeostasis suggests that the modified term, Adaptive Homeostasis, may be useful especially in studies of stress, toxicology, disease, and aging. Adaptive Homeostasis may be defined as follows: 'The transient expansion or contraction of the homeostatic range in response to exposure to sub-toxic, non-damaging, signaling molecules or events, or the removal or cessation of such molecules or events.' PMID:27112802

  3. An Advanced Leakage Scheme for Neutrino Treatment in Astrophysical Simulations

    NASA Astrophysics Data System (ADS)

    Perego, A.; Cabezón, R. M.; Käppeli, R.

    2016-04-01

    We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Neutrino trapped components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows a very good qualitative and a partial quantitative agreement for key quantities from collapse to a few hundred milliseconds after core bounce. We have demonstrated the adaptability and flexibility of our ASL scheme by coupling it to an axisymmetric Eulerian and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. Therefore, the neutrino treatment presented here is ideal for large parameter-space explorations, parametric studies, high-resolution tests, code developments, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.

  4. An improved high-order scheme for DNS of low Mach number turbulent reacting flows based on stiff chemistry solver

    NASA Astrophysics Data System (ADS)

    Yu, Rixin; Yu, Jiangfei; Bai, Xue-Song

    2012-06-01

    We present an improved numerical scheme for numerical simulations of low Mach number turbulent reacting flows with detailed chemistry and transport. The method is based on a semi-implicit operator-splitting scheme with a stiff solver for integration of the chemical kinetic rates, developed by Knio et al. [O.M. Knio, H.N. Najm, P.S. Wyckoff, A semi-implicit numerical scheme for reacting flow II. Stiff, operator-split formulation, Journal of Computational Physics 154 (2) (1999) 428-467]. Using the material derivative form of the continuity equation, we enhance the scheme to allow for large density ratios in the flow field. The scheme is developed for direct numerical simulation of turbulent reacting flow by employing high-order discretization for the spatial terms. The accuracy of the scheme in space and time is verified by examining the grid/time-step dependency on one-dimensional benchmark cases: a freely propagating premixed flame in an open environment and in an enclosure related to spark-ignition engines. The scheme is then examined in simulations of a two-dimensional laminar flame/vortex-pair interaction. Furthermore, we apply the scheme to direct numerical simulation of a homogeneous charge compression ignition (HCCI) process in an enclosure studied previously in the literature. Satisfactory agreement is found in terms of the overall ignition behavior, local reaction zone structures, and statistical quantities. Finally, the scheme is used to study the development of intrinsic flame instabilities in a lean H2/air premixed flame, where it is shown that the spatial and temporal accuracy of numerical schemes can have a great impact on the prediction of the sensitive nonlinear evolution of flame instability.

  5. Fundamental Limitations in Advanced LC Schemes

    SciTech Connect

    Mikhailichenko, A. A.

    2010-11-04

    Fundamental limitations in acceleration gradient, emittance, alignment, and polarization are considered as applied to novel acceleration schemes, including laser-plasma and structure-based schemes. Problems for each method are highlighted wherever possible. Main attention is paid to the scheme with a tilted laser bunch.

  6. A scheme for symmetrization verification

    NASA Astrophysics Data System (ADS)

    Sancho, Pedro

    2011-08-01

    We propose a scheme for symmetrization verification in two-particle systems, based on one-particle detection and state determination. In contrast to previous proposals, it does not follow a Hong-Ou-Mandel-type approach. Moreover, the technique can be used to generate superposition states of single particles.

  7. Invisibly Sanitizable Digital Signature Scheme

    NASA Astrophysics Data System (ADS)

    Miyazaki, Kunihiko; Hanaoka, Goichiro; Imai, Hideki

    A digital signature does not allow any alteration of the document to which it is attached. Appropriate alteration of some signed documents, however, should be allowed because there are security requirements other than the integrity of the document. In the disclosure of official information, for example, sensitive information such as personal information or national secrets is masked when an official document is sanitized so that its nonsensitive information can be disclosed when it is requested by a citizen. If this disclosure is done digitally by using the current digital signature schemes, the citizen cannot verify the disclosed information because it has been altered to prevent the leakage of sensitive information. The confidentiality of official information is thus incompatible with the integrity of that information, and this is called the digital document sanitizing problem. Conventional solutions such as content extraction signatures and digitally signed document sanitizing schemes with disclosure condition control can either let the sanitizer assign disclosure conditions or hide the number of sanitized portions. The digitally signed document sanitizing scheme we propose here is based on the aggregate signature derived from bilinear maps and can do both. Moreover, the proposed scheme can sanitize a signed document invisibly, that is, no one can distinguish whether the signed document has been sanitized or not.

  8. Geophysical Inversion Through Hierarchical Scheme

    NASA Astrophysics Data System (ADS)

    Furman, A.; Huisman, J. A.

    2010-12-01

    Geophysical investigation is a powerful tool that allows non-invasive and non-destructive mapping of subsurface states and properties. However, the non-uniqueness associated with the inversion process prevents the quantitative use of these methods. One major research direction is to constrain the inverse problem with hydrological observations and models. An alternative to the commonly used direct inversion methods are global optimization schemes (such as genetic algorithms and Markov chain Monte Carlo methods). However, the major limitation here is the desired high resolution of the tomographic image, which leads to a large number of parameters and an unreasonably high computational effort when using global optimization schemes. Two innovative schemes are presented here. First, a hierarchical approach is used to reduce the computational effort of the global optimization: a solution is obtained at coarse spatial resolution and then used as the starting point for a finer scheme. We show that the computational effort is reduced dramatically in this way. Second, we use a direct ERT inversion as the starting point for global optimization. In this case, preliminary results show that the outcome is not necessarily beneficial, probably because of a spatial mismatch between the results of the direct inversion and the true resistivity field.

  9. Application of Central Upwind Scheme for Solving Special Relativistic Hydrodynamic Equations.

    PubMed

    Yousaf, Muhammad; Ghaffar, Tayabia; Qamar, Shamsul

    2015-01-01

    The accurate modeling of various features in high energy astrophysical scenarios requires the solution of the Einstein equations together with those of special relativistic hydrodynamics (SRHD). Such models are more complicated than the non-relativistic ones due to the nonlinear relations between the conserved and state variables. A high-resolution shock-capturing central upwind scheme is implemented to solve the given set of equations. The proposed technique uses the precise information of local propagation speeds to avoid the excessive numerical diffusion. The second order accuracy of the scheme is obtained with the use of MUSCL-type initial reconstruction and Runge-Kutta time stepping method. After a discussion of the equations solved and of the techniques employed, a series of one and two-dimensional test problems are carried out. To validate the method and assess its accuracy, the staggered central and the kinetic flux-vector splitting schemes are also applied to the same model. The scheme is robust and efficient. Its results are comparable to those obtained from the sophisticated algorithms, even in the case of highly relativistic two-dimensional test problems. PMID:26070067
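
    The role of local propagation speeds in a central scheme can be illustrated with a Rusanov (local Lax-Friedrichs) flux, a simplified first-order relative of the central upwind scheme described above, applied to the inviscid Burgers equation u_t + (u^2/2)_x = 0; the grid, time step, and boundary handling below are illustrative assumptions:

```python
# Central flux with local propagation speeds for Burgers' equation:
# F_{i+1/2} = (f(u_i) + f(u_{i+1}))/2 - a/2 (u_{i+1} - u_i),
# where a = max(|u_i|, |u_{i+1}|) is the local wave speed. Using the
# local speed instead of a global bound limits numerical diffusion.

def f(u):
    """Burgers flux f(u) = u^2 / 2."""
    return 0.5 * u * u

def step(u, dx, dt):
    """One forward-Euler step; boundary cells held fixed for simplicity."""
    n = len(u)
    F = []
    for i in range(n - 1):
        ul, ur = u[i], u[i + 1]
        a = max(abs(ul), abs(ur))                       # local propagation speed
        F.append(0.5 * (f(ul) + f(ur)) - 0.5 * a * (ur - ul))
    return [u[0]] + [u[i] - dt / dx * (F[i] - F[i - 1])
                     for i in range(1, n - 1)] + [u[-1]]
```

The scheme above is first order; the MUSCL reconstruction and Runge-Kutta stepping mentioned in the abstract would be layered on top of this flux to reach second-order accuracy.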

  10. Application of Central Upwind Scheme for Solving Special Relativistic Hydrodynamic Equations

    PubMed Central

    Yousaf, Muhammad; Ghaffar, Tayabia; Qamar, Shamsul

    2015-01-01

    The accurate modeling of various features in high energy astrophysical scenarios requires the solution of the Einstein equations together with those of special relativistic hydrodynamics (SRHD). Such models are more complicated than the non-relativistic ones due to the nonlinear relations between the conserved and state variables. A high-resolution shock-capturing central upwind scheme is implemented to solve the given set of equations. The proposed technique uses the precise information of local propagation speeds to avoid the excessive numerical diffusion. The second order accuracy of the scheme is obtained with the use of MUSCL-type initial reconstruction and Runge-Kutta time stepping method. After a discussion of the equations solved and of the techniques employed, a series of one and two-dimensional test problems are carried out. To validate the method and assess its accuracy, the staggered central and the kinetic flux-vector splitting schemes are also applied to the same model. The scheme is robust and efficient. Its results are comparable to those obtained from the sophisticated algorithms, even in the case of highly relativistic two-dimensional test problems. PMID:26070067

  11. The spatial fourth-order energy-conserved S-FDTD scheme for Maxwell's equations

    NASA Astrophysics Data System (ADS)

    Liang, Dong; Yuan, Qiang

    2013-06-01

    In this paper we develop a new spatial fourth-order energy-conserved splitting finite-difference time-domain method for Maxwell's equations. Based on staggered grids, the splitting technique is applied to obtain a three-stage energy-conserved splitting scheme. At each stage, the spatial fourth-order difference operators on the strict interior nodes are formed by a linear combination of two central differences, one with a single spatial step and the other with three spatial steps; we then propose spatial high-order near-boundary differences on the near-boundary nodes which ensure that the scheme preserves energy conservation and has fourth-order accuracy in the spatial step. The proposed scheme has the important properties of being energy-conserved, unconditionally stable, non-dissipative, high-order accurate, and computationally efficient. We first prove that the scheme satisfies energy conservation and is unconditionally stable. We then prove optimal error estimates of fourth order in the spatial step and second order in the time step for the electric and magnetic fields, and obtain the convergence and error estimate of the divergence-free condition as well. Numerical dispersion analysis and numerical experiments are presented to confirm our theoretical results.
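
    The interior operator described above, a linear combination of a one-step and a three-step central difference, can be sketched as follows (the coefficients 27/48 and -1/48 are the standard ones for this stencil; the paper's near-boundary closures are not reproduced):

```python
# Fourth-order first derivative from two central differences:
# combining (u[i+1]-u[i-1])/(2h) and (u[i+3]-u[i-3])/(6h) with weights
# chosen to cancel the h^2 error term gives
#   u'(x_i) ~ (27(u[i+1]-u[i-1]) - (u[i+3]-u[i-3])) / (48 h).
import math

def d1_fourth(u, h, i):
    """Fourth-order accurate first derivative at interior node i."""
    return (27.0 * (u[i + 1] - u[i - 1]) - (u[i + 3] - u[i - 3])) / (48.0 * h)
```

A quick check against u = sin(x) shows the error is orders of magnitude below the plain second-order central difference on the same grid.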

  12. An efficient, unconditionally energy stable local discontinuous Galerkin scheme for the Cahn-Hilliard-Brinkman system

    NASA Astrophysics Data System (ADS)

    Guo, Ruihan; Xu, Yan

    2015-10-01

    In this paper, we present an efficient and unconditionally energy stable fully-discrete local discontinuous Galerkin (LDG) method for approximating the Cahn-Hilliard-Brinkman (CHB) system, which is comprised of a Cahn-Hilliard type equation and a generalized Brinkman equation modeling fluid flow. The semi-discrete energy stability of the LDG method is proved first. Due to the strict time-step restriction (Δt = O(Δx⁴)) of explicit time discretization methods for stability, we introduce a semi-implicit scheme which consists of the implicit Euler method combined with a convex splitting of the discrete Cahn-Hilliard energy for the temporal discretization. The unconditional energy stability of this fully-discrete convex splitting scheme is also proved. Since the fully-discrete equations at the implicit time level are nonlinear, to enhance the efficiency of the proposed approach, the nonlinear Full Approximation Scheme (FAS) multigrid method has been employed to solve this system of algebraic equations. We also show the nearly optimal complexity numerically. Numerical experiments based on the overall solution method of combining the proposed LDG method, the convex splitting scheme, and the nonlinear multigrid solver are given to validate the theoretical results and to show the effectiveness of the proposed approach for the CHB system.

  13. On symmetric and upwind TVD schemes

    NASA Technical Reports Server (NTRS)

    Yee, H. C.

    1985-01-01

    A class of explicit and implicit total variation diminishing (TVD) schemes for the compressible Euler and Navier-Stokes equations was developed. They do not generate spurious oscillations across shocks and contact discontinuities. In general, shocks can be captured within 1 to 2 grid points. For the inviscid case, these schemes are divided into upwind TVD schemes and symmetric (nonupwind) TVD schemes. The upwind TVD scheme is based on the second-order TVD scheme. The symmetric TVD scheme is a generalization of Roe's and Davis' TVD Lax-Wendroff scheme. The performance of these schemes on some viscous and inviscid airfoil steady-state calculations is investigated. The symmetric and upwind TVD schemes are compared.
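
    The limiting idea underlying TVD schemes can be illustrated with a minmod-limited (MUSCL-type) slope, which reverts to zero slope at extrema so that no new oscillations are created; this is a generic sketch, not Yee's specific limiter functions:

```python
# Minmod limiter: pick the smaller of the left and right one-sided
# differences when they agree in sign, and zero otherwise (an extremum).
# The resulting piecewise-linear reconstruction is total variation diminishing.

def minmod(a, b):
    if a * b <= 0.0:
        return 0.0                        # extremum: flatten to avoid oscillation
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """Limited cell slopes; zero slope at the end points."""
    return [0.0] + [minmod(u[i] - u[i - 1], u[i + 1] - u[i])
                    for i in range(1, len(u) - 1)] + [0.0]
```

Near a discontinuity one of the two differences is large, so the limiter selects the small one, which is how shocks are captured within one to two grid points without spurious overshoot.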

  14. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    USGS Publications Warehouse

    Hwang, Y.; Clark, M.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

    Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately on wet days. This process generates the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by the different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, the LWP input showed the least streamflow error in the Alapaha basin and the CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
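
    The first of the four schemes compared above, the inverse distance weighted average, can be sketched as follows (the power parameter p = 2 and the station coordinates and values are illustrative assumptions):

```python
# Inverse distance weighting (IDW): the estimate at (x, y) is a weighted
# average of station values, with weights proportional to 1/d^p so that
# nearer stations dominate. The interpolant is exact at station locations.

def idw(x, y, stations, power=2.0):
    """stations: list of (xs, ys, value); returns interpolated value at (x, y)."""
    num = den = 0.0
    for xs, ys, v in stations:
        d2 = (x - xs) ** 2 + (y - ys) ** 2
        if d2 == 0.0:
            return v                      # exact at an observation point
        w = d2 ** (-power / 2.0)          # weight = 1 / d^power
        num += w * v
        den += w
    return num / den
```

The two-step variant described above would first run a wet/dry classifier at each grid point and apply an interpolator like this only on the days classified as wet.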

  15. Extension of Low Dissipative High Order Hydrodynamics Schemes for MHD Equations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, Bjoern; Mansour, Nagi (Technical Monitor)

    2002-01-01

    The objective of this paper is to extend our recently developed, highly parallelizable, nonlinearly stable high-order schemes for complex multiscale hydrodynamic applications to the viscous MHD (magnetohydrodynamic) equations. These schemes employ multiresolution wavelets as adaptive numerical dissipation controls to limit the amount, and to aid the selection and/or blending, of the appropriate types of dissipation to be used. The new scheme is formulated for both the conservative and non-conservative forms of the MHD equations in curvilinear grids. The three features of the present MHD scheme over existing schemes in the open literature are as follows. First, the scheme is constructed for long-time integrations of shock/turbulence/combustion magnetized flows; available schemes are too diffusive for long-time integrations and/or turbulence/combustion problems. Second, unlike existing schemes for the conservative MHD equations, which suffer from ill-conditioned eigen-decompositions, the present scheme makes use of a well-conditioned eigen-decomposition to solve the conservative form of the MHD equations. This is due, in part, to the fact that the divergence-free condition on the magnetic field is a different type of constraint from its incompressible Navier-Stokes cousin. Third, a new approach to minimize the numerical error of the divergence-free magnetic condition for high-order schemes is introduced.

  16. The Importance of Using Explicit and then Implicit Schemes in the Fast time-scale Rupturing at Oceanic-Continental Boundary

    NASA Astrophysics Data System (ADS)

    So, B.; Yuen, D. A.; Lee, S.

    2011-12-01

    Numerical modeling in geodynamics, such as of subduction and lithospheric rupture, normally uses only one scheme (e.g., implicit or explicit). However, every geodynamical phenomenon exhibits multi-time-scale instability, so these problems cannot be solved completely by just one numerical scheme, since implicit and explicit schemes have different characteristics and are stable at different time-step sizes. Modelers should therefore select an appropriate scheme for their problem. In our fully coupled thermal-mechanical finite element modeling of asymmetric instability initiation induced by the shear modulus contrast between oceanic and continental lithospheres, two attached lithospheres with different shear moduli but the same visco-plastic rheologies are compressed at a constant velocity of a few centimeters per year. We use explicit and implicit schemes at the stages of elastic energy release and strengthening of shear localization, respectively. Since elastic energy propagates quickly from the top and bottom of the lithosphere (>1500 km/Myr), an explicit scheme is more suitable than an implicit scheme for capturing the thermal runaway effect at the initial stage of stored elastic energy release (i.e., this stage needs a small time-step size). The results calculated with the only-implicit scheme and the explicit-implicit hybrid scheme are different, because the latter resolves the fast time-scale energy dissipation and the temperature field better than the former. The small temperature difference between the implicit and explicit schemes may cause a large difference at a later stage due to thermal-mechanical feedback. To investigate the timing of the initiation of asymmetric instability crossing the interface, temperature and plastic energy distributions are calculated on a fine grid over shear modulus contrast and activation energy for a period of 1 Myr. We found that asymmetric shear instabilities are induced by an elastic shear modulus contrast over a wide range of

  17. Connector adapter

    NASA Technical Reports Server (NTRS)

    Hacker, Scott C. (Inventor); Dean, Richard J. (Inventor); Burge, Scott W. (Inventor); Dartez, Toby W. (Inventor)

    2007-01-01

    An adapter for installing a connector to a terminal post, wherein the connector is attached to a cable, is presented. In an embodiment, the adapter is comprised of an elongated collet member having a longitudinal axis comprised of a first collet member end, a second collet member end, an outer collet member surface, and an inner collet member surface. The inner collet member surface at the first collet member end is used to engage the connector. The outer collet member surface at the first collet member end is tapered for a predetermined first length at a predetermined taper angle. The collet includes a longitudinal slot that extends along the longitudinal axis initiating at the first collet member end for a predetermined second length. The first collet member end is formed of a predetermined number of sections segregated by a predetermined number of channels and the longitudinal slot.

  18. Adaptive sampler

    DOEpatents

    Watson, Bobby L.; Aeby, Ian

    1982-01-01

    An adaptive data compression device for compressing data having variable frequency content, including a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.
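
    The variable-rate idea, clocking data into memory more slowly where the analyzed frequency content is low, can be sketched as follows (a software caricature of the patented device: the window size, activity measure, and threshold are all hypothetical):

```python
# Adaptive-rate compression sketch: estimate the high-frequency content of
# each window from first differences, then store samples at full rate in
# busy windows and at a decimated rate in quiet ones (index, value pairs).

def adaptive_compress(signal, window=8, threshold=0.5, slow_step=4):
    out = []
    for start in range(0, len(signal), window):
        block = signal[start:start + window]
        # total absolute first difference as a crude high-frequency measure
        activity = sum(abs(b - a) for a, b in zip(block, block[1:]))
        step = 1 if activity > threshold else slow_step
        out.extend((start + i, block[i]) for i in range(0, len(block), step))
    return out
```

Quiet stretches of the input thus consume a fraction of the memory, which is the effect the variable-rate memory clock achieves in hardware.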

  19. Adaptive sampler

    DOEpatents

    Watson, B.L.; Aeby, I.

    1980-08-26

    An adaptive data compression device for compressing data having variable frequency content is described. The device includes a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.

  20. Analysis of the Turkel-Zwas scheme for the two-dimensional shallow water equations in spherical coordinates

    SciTech Connect

    Neta, B.; Giraldo, F.X.; Navon, I.M.

    1997-05-01

    A linear analysis of the shallow water equations in spherical coordinates for the Turkel-Zwas (T-Z) explicit large time-step scheme is presented. This paper complements the results of Schoenstadt, Neta and Navon, and others in 1-D, and of Neta and DeVito in 2-D, but applied to the spherical coordinate case of the T-Z scheme. This coordinate system is more realistic in meteorology and more complicated to analyze, since the coefficients are no longer constant. The analysis suggests that the T-Z scheme must be staggered in a certain way in order to get eigenvalues and eigenfunctions approaching those of the continuous case. The importance of such an analysis is the fact that it is also valid for nonconstant coefficients and thereby applicable to any numerical scheme. Numerical experiments comparing the original (unstaggered) and staggered versions of the T-Z scheme are presented. These experiments corroborate the analysis by showing the improvements in accuracy gained by staggering the Turkel-Zwas scheme. 14 refs., 5 figs., 1 tab.