Finite difference schemes for long-time integration
NASA Technical Reports Server (NTRS)
Haras, Zigo; Taasan, Shlomo
1993-01-01
Finite difference schemes for the evaluation of first and second derivatives are presented. These second-order compact schemes were designed for long-time integration of evolution equations by solving a quadratic constrained minimization problem. The quadratic cost function measures the global truncation error while taking the initial data into account. The resulting schemes permit integration times four or more times longer than those of similar previously studied schemes. A similar approach was used to obtain improved time-integration schemes.
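As a point of reference for the optimized compact schemes described above, the standard (non-optimized) second-order central stencils for first and second derivatives can be sketched in a few lines of Python; the test function, evaluation point, and step sizes below are illustrative assumptions, not taken from the paper.

```python
import math

def d1_central(f, x, h):
    # second-order central difference for f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d2_central(f, x, h):
    # second-order central difference for f''(x)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# halving h should shrink the error roughly fourfold for a second-order scheme
e_coarse = abs(d1_central(math.sin, 1.0, 1e-2) - math.cos(1.0))
e_fine = abs(d1_central(math.sin, 1.0, 5e-3) - math.cos(1.0))
ratio = e_coarse / e_fine   # ~4
```

The measured error ratio of about four under step halving is exactly the second-order truncation behavior that the paper's cost function quantifies globally.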
Time integration algorithms for the two-dimensional Euler equations on unstructured meshes
NASA Technical Reports Server (NTRS)
Slack, David C.; Whitaker, D. L.; Walters, Robert W.
1994-01-01
Explicit and implicit time integration algorithms for the two-dimensional Euler equations on unstructured grids are presented. Both cell-centered and cell-vertex finite volume upwind schemes utilizing Roe's approximate Riemann solver are developed. For the cell-vertex scheme, a four-stage Runge-Kutta time integration, a four-stage Runge-Kutta time integration with implicit residual averaging, a point Jacobi method, a symmetric point Gauss-Seidel method and two methods utilizing preconditioned sparse matrix solvers are presented. For the cell-centered scheme, a Runge-Kutta scheme, an implicit tridiagonal relaxation scheme modeled after line Gauss-Seidel, a fully implicit lower-upper (LU) decomposition, and a hybrid scheme utilizing both Runge-Kutta and LU methods are presented. A reverse Cuthill-McKee renumbering scheme is employed for the direct solver to decrease CPU time by reducing the fill of the Jacobian matrix. A comparison of the various time integration schemes is made for both first-order and higher order accurate solutions using several mesh sizes; higher order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The results obtained for a transonic flow over a circular arc suggest that the preconditioned sparse matrix solvers perform better than the other methods as the number of elements in the mesh increases.
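The four-stage Runge-Kutta time integration mentioned above can be sketched in its classical scalar form as follows; the model problem du/dt = -u is an illustrative assumption, not one of the Euler-equation test cases from the paper.

```python
import math

def rk4_step(f, t, u, dt):
    # classical four-stage Runge-Kutta step for du/dt = f(t, u)
    k1 = f(t, u)
    k2 = f(t + dt / 2, u + dt / 2 * k1)
    k3 = f(t + dt / 2, u + dt / 2 * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# model problem: du/dt = -u, u(0) = 1, integrated to t = 1
u, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    u = rk4_step(lambda s, v: -v, t, u, dt)
    t += dt
err = abs(u - math.exp(-1.0))   # fourth-order global error, tiny at dt = 0.01
```

In a flow solver the scalar u becomes the vector of conserved variables and f the upwind residual, but the four-stage structure is the same.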
A numerical scheme to solve unstable boundary value problems
NASA Technical Reports Server (NTRS)
Kalnay de Rivas, E.
1975-01-01
A new iterative scheme for solving boundary value problems is presented. It consists of the introduction of an artificial time dependence into a modified version of the system of equations. Explicit forward integrations in time are then followed by explicit integrations backward in time. The method converges under much more general conditions than schemes based on forward time integration alone (false transient schemes). In particular, it can attain a steady-state solution of an elliptic system of equations even if that solution is unstable, a case in which other iterative schemes fail to converge. Its simplicity makes it attractive for solving large systems of nonlinear equations.
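For context, the ordinary false-transient (forward-only) relaxation that the paper generalizes can be sketched on a stable model problem; the grid size, forcing, and pseudo-time step below are illustrative assumptions, and the paper's actual contribution (alternating forward and backward integrations to handle unstable steady states) is not shown here.

```python
# false-transient (pseudo-time) relaxation toward the steady state of
# u'' = 1 on (0, 1) with u(0) = u(1) = 0; this sketches only the forward
# half of the method -- the backward integrations that handle unstable
# steady states are omitted
n = 21
h = 1.0 / (n - 1)
u = [0.0] * n            # boundary values stay fixed at 0
dt = 0.4 * h * h         # within the explicit diffusion stability limit
for _ in range(5000):
    u_new = u[:]
    for i in range(1, n - 1):
        u_new[i] = u[i] + dt * ((u[i-1] - 2.0*u[i] + u[i+1]) / (h*h) - 1.0)
    u = u_new
# the steady state is u(x) = x*(x - 1)/2, so u(0.5) = -0.125
```

When the steady state is unstable, this forward iteration diverges, which is precisely the failure mode the forward-backward scheme is designed to cure.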
Development of the Semi-implicit Time Integration in KIM-SH
NASA Astrophysics Data System (ADS)
NAM, H.
2015-12-01
The Korea Institute of Atmospheric Prediction Systems (KIAPS) was founded in 2011 by the Korea Meteorological Administration (KMA) to develop Korea's own global Numerical Weather Prediction (NWP) system as a nine-year (2011-2019) project. KIM-SH is a KIAPS spectral-element model based on HOMME. KIM-SH originally employed explicit time-integration schemes; however, explicit schemes tend to be unstable and require very small time steps, whereas semi-implicit schemes are very stable and permit much larger time steps. We therefore introduce three- and two-time-level semi-implicit schemes in KIM-SH as the time integration. We define the linear and reference values and, following the semi-implicit formulation, apply GMRES as the linear solver. Numerical results from experiments will be presented together with the current development status of the time integration in KIM-SH. Several numerical examples confirm the efficiency and reliability of the proposed schemes.
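The basic two-time-level semi-implicit idea can be sketched on a scalar model: the stiff linear term (the analogue of the fast gravity-wave terms) is treated implicitly and the remainder explicitly. In KIM-SH the implicit part is a large sparse system solved with GMRES; in this sketch the "linear solve" is a scalar division, and all coefficients are illustrative assumptions.

```python
import math

# two-time-level semi-implicit (IMEX Euler) sketch:
#   (1 - dt*a) * u_new = u + dt * N(u)
# with the stiff linear term a*u implicit and the mild term N(u) explicit
a = -1000.0                  # stiff linear coefficient (illustrative)
N = lambda u: math.sin(u)    # mild nonlinear term, treated explicitly
u, dt = 1.0, 0.01            # dt is 5x the explicit stability limit 2/|a|
for _ in range(1000):
    u = (u + dt * N(u)) / (1.0 - dt * a)
```

An explicit step of this size would amplify the solution by a factor of 9 per step; the semi-implicit step remains stable and relaxes smoothly to the fixed point.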
Advanced time integration algorithms for dislocation dynamics simulations of work hardening
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sills, Ryan B.; Aghaei, Amin; Cai, Wei
2016-04-25
Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high-order explicit method with time step subcycling and a newly developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
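The subcycling idea above can be sketched on a toy two-mode system: the stiff "fast" mode takes m substeps of size dt/m inside each global step dt, so the slow mode keeps the large step. The two decay rates and step sizes are illustrative assumptions, not dislocation-dynamics values.

```python
# subcycling sketch: the fast mode is advanced with m substeps per global
# step dt, so stability of the fast mode no longer limits the global step
def step(x_slow, x_fast, dt, m):
    x_slow = x_slow + dt * (-1.0 * x_slow)          # slow mode: one step
    sub = dt / m
    for _ in range(m):
        x_fast = x_fast + sub * (-200.0 * x_fast)   # fast mode: m substeps
    return x_slow, x_fast

xs, xf = 1.0, 1.0
dt, m = 0.02, 20   # dt alone violates the fast mode's explicit limit 2/200
for _ in range(50):
    xs, xf = step(xs, xf, dt, m)
```

Without subcycling, explicit Euler at dt = 0.02 would diverge on the fast mode; with it, both modes decay correctly while the global step stays large.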
Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks.
Shelley, M J; Tao, L
2001-01-01
To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified Runge-Kutta fourth-order scheme needs a time step of Δt = 0.5 × 10⁻³ seconds, whereas achieving comparable accuracy with a recalibrated second-order or a first-order algorithm requires time steps of 10⁻⁵ seconds or 10⁻⁹ seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
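The second-order variant of this modified time-stepping (an RK2 step, a linear interpolant for the spike time, and re-integration of the remainder of the step from the reset potential) can be sketched for a single leaky integrate-and-fire neuron; the membrane model dV/dt = -V + I and all parameters are illustrative assumptions.

```python
# sketch: RK2 step + linear spike-time interpolation + postspike recalibration
def lif_step(V, I, dt, Vth=1.0, Vr=0.0):
    dV = lambda v: -v + I
    k1 = dV(V)
    k2 = dV(V + dt * k1)
    V_new = V + 0.5 * dt * (k1 + k2)        # second-order (Heun) step
    if V_new < Vth:
        return V_new, None                  # no spike in this step
    t_spike = dt * (Vth - V) / (V_new - V)  # linear interpolant for crossing
    rem = dt - t_spike                      # recalibrate: re-integrate the
    k1 = dV(Vr)                             # remainder from the reset value
    k2 = dV(Vr + rem * k1)
    return Vr + 0.5 * rem * (k1 + k2), t_spike
```

Interpolating the spike time instead of resetting at the end of the step is what preserves the second-order accuracy that a naive reset destroys.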
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finn, John M., E-mail: finn@lanl.gov
2015-03-15
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure-preserving flows, the integration of magnetic field lines in a plasma, and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012)], appears to work very well.
NASA Technical Reports Server (NTRS)
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models.
van Elburg, Ronald A J; van Ooyen, Arjen
2009-07-01
An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on the time constants of the synaptic currents, which hamper its general applicability. This letter addresses this problem in two ways. First, we provide physical arguments demonstrating why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. As part of our formal proof, we introduce the generalized Carnevale-Hines lemma, a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems, including receptor-neurotransmitter dissociation followed by channel closing. Through repeated application of the generalized lemma, we lift most of the original constraints on the time constants. Thus, we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.
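Event-based schemes of the Carnevale-Hines type rely on closed-form propagation of the state between events. A minimal sketch for a membrane driven by a single exponentially decaying synaptic current (simpler than the double-exponential case treated in the letter) is given below; the time constants are illustrative assumptions, and tau_m != tau_s is assumed.

```python
import math

def advance(V, g, dt, tau_m=1.0, tau_s=0.5):
    # exact update over an event-free interval dt for
    #   tau_m * dV/dt = -V + g(t),   g(t) = g * exp(-t / tau_s)
    # (assumes tau_m != tau_s; parameters are illustrative)
    a = math.exp(-dt / tau_s)    # synaptic decay factor
    b = math.exp(-dt / tau_m)    # membrane decay factor
    V_new = V * b + g * tau_s / (tau_s - tau_m) * (a - b)
    return V_new, g * a
```

Because the update is exact, the simulation can jump from one synaptic or spike event to the next with no per-step truncation error, which is the appeal of the event-based approach.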
Finite time step and spatial grid effects in δf simulation of warm plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturdevant, Benjamin J., E-mail: benjamin.j.sturdevant@gmail.com; Department of Applied Mathematics, University of Colorado at Boulder, Boulder, CO 80309; Parker, Scott E.
2016-01-15
This paper introduces a technique for analyzing time integration methods used with the particle weight equations in δf method particle-in-cell (PIC) schemes. The analysis applies to the simulation of warm, uniform, periodic or infinite plasmas in the linear regime and considers the collective behavior, similar to the analysis performed by Langdon for full-f PIC schemes [1,2]. We perform both a time integration analysis and a spatial grid analysis for a kinetic ion, adiabatic electron model of ion acoustic waves. An implicit time integration scheme is studied in detail for δf simulations using our weight equation analysis and for full-f simulations using the method of Langdon. It is found that the δf method exhibits a CFL-like stability condition for low temperature ions, which is independent of the parameter characterizing the implicitness of the scheme. The accuracy of the real frequency and damping rate due to the discrete time and spatial schemes is also derived using a perturbative method. The theoretical analysis of numerical error presented here may be useful for the verification of simulations and for providing intuition for the design of new implicit time integration schemes for the δf method, as well as for understanding differences between δf and full-f approaches to plasma simulation.
NASA Technical Reports Server (NTRS)
Reed, K. W.; Stonesifer, R. B.; Atluri, S. N.
1983-01-01
A new hybrid-stress finite element algorithm, suitable for analyses of large quasi-static deformations of inelastic solids, is presented. The principal variables in the formulation are the nominal stress rate and spin. As such, a consistent reformulation of the constitutive equation is necessary, and is discussed. The finite element equations give rise to an initial value problem. Time integration has been accomplished by Euler and Runge-Kutta schemes, and the superior accuracy of the higher-order schemes is noted. In the course of integrating stress in time, it has been demonstrated that classical schemes such as Euler and Runge-Kutta may lead to strong frame dependence. As a remedy, modified integration schemes are proposed, and the potential of the new schemes for suppressing frame dependence of the numerically integrated stress is demonstrated. The development of valid creep fracture criteria is also addressed.
A 3D image sensor with adaptable charge subtraction scheme for background light suppression
NASA Astrophysics Data System (ADS)
Shin, Jungsoon; Kang, Byongmin; Lee, Keechang; Kim, James D. K.
2013-02-01
We present a 3D ToF (Time-of-Flight) image sensor with an adaptive charge subtraction scheme for background light suppression. The proposed sensor can alternately capture a high-resolution color image and a high-quality depth map in each frame. In depth mode, the sensor requires a sufficiently long integration time for accurate depth acquisition, but saturation will occur under strong background illumination. We propose to divide the integration time into N sub-integration times adaptively. In each sub-integration time, the sensor captures an image without saturation and subtracts the charge to keep the pixel from saturating. The subtraction results are then accumulated over the N sub-integrations, yielding a final image at the full integration time with the background illumination removed. Experimental results with our own ToF sensor show strong background suppression performance. We also propose an in-pixel storage and column-level subtraction circuit for chip-level implementation of the proposed method. We believe the proposed scheme will enable 3D sensors to be used in outdoor environments.
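The accounting behind the charge-subtraction idea can be sketched numerically: split the integration time T into N sub-integrations, subtract the estimated background charge after each one, and accumulate, so that no single sub-exposure exceeds the well capacity. The rates, well depth, and the assumption of a perfect background estimate are all illustrative.

```python
# numeric sketch of adaptive charge subtraction: N short sub-integrations,
# background charge subtracted per sub-frame, net signal accumulated
def accumulate_signal(signal_rate, bg_rate, T, N, full_well):
    sub_T = T / N
    total = 0.0
    for _ in range(N):
        charge = (signal_rate + bg_rate) * sub_T   # charge this sub-frame
        if charge > full_well:
            raise OverflowError("sub-integration saturates; increase N")
        total += charge - bg_rate * sub_T          # background subtracted
    return total   # ~ signal_rate * T, with the background removed
```

With N = 1 the single exposure would saturate under strong background light; choosing N adaptively keeps each sub-frame below the well limit while the accumulated result still corresponds to the full integration time.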
NASA Astrophysics Data System (ADS)
Alimohammadi, Shahrouz; Cavaglieri, Daniele; Beyhaghi, Pooriya; Bewley, Thomas R.
2016-11-01
This work applies a recently developed derivative-free optimization algorithm to derive a new mixed implicit-explicit (IMEX) time integration scheme for Computational Fluid Dynamics (CFD) simulations. The algorithm allows imposing a specified order of accuracy for the time integration, together with other important stability properties, as nonlinear constraints within the optimization problem. In this procedure, the coefficients of the IMEX scheme must satisfy a set of constraints simultaneously. The optimization process therefore, at each iteration, estimates the location of the optimal coefficients using a set of global surrogates for both the objective and constraint functions, as well as a model of the uncertainty of these surrogates based on the concept of Delaunay triangulation. This procedure has been proven to converge to the global minimum of the constrained optimization problem provided the constraint and objective functions are twice differentiable. As a result, a new third-order, low-storage IMEX Runge-Kutta time integration scheme is obtained with remarkably fast convergence. Numerical tests leveraging turbulent channel flow simulations are then performed to validate the theoretical order of accuracy and the stability properties of the new scheme.
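The order-of-accuracy property that the optimizer imposes as a constraint can be verified empirically for any scheme by step-size refinement. The sketch below does this for Heun's second-order method on a scalar model problem (an illustrative assumption, not the paper's IMEX scheme): the observed convergence rate p should match the design order.

```python
import math

def heun(dt, T=1.0):
    # second-order Heun (explicit trapezoid) integration of u' = -u, u(0) = 1
    u = 1.0
    for _ in range(round(T / dt)):
        k1 = -u
        k2 = -(u + dt * k1)
        u += 0.5 * dt * (k1 + k2)
    return u

# observed order from two step sizes: p = log2(error(2*dt) / error(dt))
e_coarse = abs(heun(0.02) - math.exp(-1.0))
e_fine = abs(heun(0.01) - math.exp(-1.0))
p = math.log(e_coarse / e_fine, 2)   # ~2 for a second-order scheme
```

The same refinement test, applied to the full solver, is the standard way to validate that an optimized set of Runge-Kutta coefficients actually delivers its theoretical order.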
NASA Technical Reports Server (NTRS)
Vess, Melissa F.; Starin, Scott R.
2007-01-01
During design of the SDO Science and Inertial mode PID controllers, the decision was made to disable the integral torque whenever system stability was in question. Three different schemes were developed to determine when to disable or enable the integral torque, and a trade study was performed to determine which scheme to implement. The trade study compared the complexity of the control logic, the risk of not re-enabling the integral gain in time to reject steady-state error, and the amount of integral torque space used. The first scheme calculates a simplified Routh criterion to determine when to disable the integral torque. The second scheme calculates the PD part of the torque and checks whether that torque would cause actuator saturation; if so, only the PD torque is used, and if not, the integral torque is added. The third scheme compares the attitude and rate errors to limits and disables the integral torque if either error exceeds its limit. Based on the trade study results, the third scheme was selected. Once it was decided when to disable the integral torque, analysis was performed to determine how to disable it and whether to reset the integrator once the integral torque was re-enabled. Three ways to disable the integral torque were investigated: zero the input to the integrator, which holds the integral part of the PID control torque constant; zero the integral torque directly but allow the integrator to continue integrating; or zero the integral torque directly and reset the integrator on integral torque reactivation. The analysis considered the complexity of the control logic, the slew time plus settling time between each calibration maneuver step, and the ability to reject steady-state error. Based on the results of the analysis, the decision was made to zero the input to the integrator without resetting it. Throughout the analysis, a high-fidelity simulation was used to test the various implementation methods.
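The selected combination (scheme 3, with the integrator input zeroed rather than the integrator reset) is a conventional conditional-integration anti-windup and can be sketched as follows; the gains and error limits are illustrative assumptions, not SDO flight values.

```python
class PID:
    # conditional-integration PID (scheme 3): the integrator input is
    # zeroed, not reset, whenever attitude or rate error exceeds a limit
    def __init__(self, kp, ki, kd, err_limit, rate_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.err_limit, self.rate_limit = err_limit, rate_limit
        self.integral = 0.0

    def torque(self, err, rate_err, dt):
        if abs(err) <= self.err_limit and abs(rate_err) <= self.rate_limit:
            self.integral += err * dt   # integrator input enabled
        # otherwise the integral torque is simply held constant
        return self.kp * err + self.kd * rate_err + self.ki * self.integral

pid = PID(kp=1.0, ki=0.1, kd=0.5, err_limit=0.1, rate_limit=0.1)
t_large = pid.torque(1.0, 0.0, 0.1)   # large error: integrator frozen
t_small = pid.torque(0.05, 0.0, 0.1)  # small error: integrator active
```

Holding the integral constant rather than resetting it preserves the accumulated steady-state correction across slews, which matches the rationale given in the abstract.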
An exponential time-integrator scheme for steady and unsteady inviscid flows
NASA Astrophysics Data System (ADS)
Li, Shu-Jie; Luo, Li-Shi; Wang, Z. J.; Ju, Lili
2018-07-01
An exponential time-integrator scheme of second-order accuracy based on the predictor-corrector methodology, denoted PCEXP, is developed to solve multi-dimensional nonlinear partial differential equations pertaining to fluid dynamics. The effective and efficient implementation of PCEXP is realized by means of the Krylov method. The linear stability and truncation error are analyzed through a one-dimensional model equation. The proposed PCEXP scheme is applied to the Euler equations discretized with a discontinuous Galerkin method in both two and three dimensions. The effectiveness and efficiency of the PCEXP scheme are demonstrated for both steady and unsteady inviscid flows. The accuracy and efficiency of the PCEXP scheme are verified and validated through comparisons with the explicit third-order total variation diminishing Runge-Kutta scheme (TVDRK3), the implicit backward Euler (BE) and the implicit second-order backward difference formula (BDF2). For unsteady flows, the PCEXP scheme generates a temporal error much smaller than the BDF2 scheme does, while maintaining the expected acceleration at the same time. Moreover, the PCEXP scheme is also shown to achieve the computational efficiency comparable to the implicit schemes for steady flows.
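The core idea of exponential integrators (the stiff linear part is propagated exactly through the matrix exponential, here a scalar) can be sketched with first-order exponential Euler, which is simpler than the paper's second-order predictor-corrector PCEXP; the coefficients and nonlinear term are illustrative assumptions.

```python
import math

# exponential Euler sketch for u' = a*u + N(u): the linear part is
# integrated exactly, the nonlinear part is frozen over the step
a = -10.0                          # stiff linear coefficient (illustrative)
N = lambda u: 0.5 * math.cos(u)    # nonlinear remainder
u, dt = 1.0, 0.1
E = math.exp(a * dt)
phi = (E - 1.0) / a                # the phi_1 function of the method
for _ in range(200):
    u = E * u + phi * N(u)
```

A useful property visible even in this toy version: the fixed point of the discrete map satisfies a*u + N(u) = 0 exactly, so steady states are reproduced without time-discretization bias.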
Integrated optical 3D digital imaging based on DSP scheme
NASA Astrophysics Data System (ADS)
Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.
2008-03-01
We present a scheme of integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently without PC support. The scheme relies on a parallel hardware structure built around the DSP and a field programmable gate array (FPGA) to realize 3-D imaging, and adopts phase measurement profilometry. To realize pipelined processing of fringe projection, image acquisition, and fringe pattern analysis, we developed a multi-threaded application program under the DSP/BIOS RTOS (real-time operating system); because the RTOS provides a preemptive kernel and a powerful configuration tool, we are able to achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and fast 3-D imaging. Experimental results are presented to show the validity of the proposed scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xiaodong; Xia, Yidong; Luo, Hong
2016-10-05
A comparative study of two classes of third-order implicit time integration schemes is presented for a third-order hierarchical WENO reconstructed discontinuous Galerkin (rDG) method to solve the 3D unsteady compressible Navier-Stokes equations: 1) the explicit first stage, single diagonally implicit Runge-Kutta (ESDIRK3) scheme, and 2) the Rosenbrock-Wanner (ROW) schemes based on the differential algebraic equations (DAEs) of index 2. Compared with the ESDIRK3 scheme, a remarkable feature of the ROW schemes is that they require only one approximate Jacobian matrix calculation per time step, considerably reducing the overall computational cost. A variety of test cases, ranging from inviscid flows to DNS of turbulent flows, are presented to assess the performance of these schemes. The numerical experiments demonstrate that the third-order ROW scheme for the DAEs of index 2 can not only achieve the designed formal order of temporal convergence accuracy in a benchmark test, but also requires significantly less computing time than its ESDIRK3 counterpart to converge to the same level of discretization errors in all of the flow simulations in this study, indicating that the ROW methods provide an attractive alternative for the higher-order time-accurate integration of the unsteady compressible Navier-Stokes equations.
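The defining economy of ROW methods (one Jacobian evaluation and one linear solve per step, with no Newton iteration) can be illustrated with a first-order linearly implicit Euler step on a scalar stiff problem; the paper's schemes are third order and applied to the Navier-Stokes equations, so the model problem and coefficients here are illustrative assumptions.

```python
# linearly implicit (Rosenbrock-type) Euler sketch: one Jacobian evaluation
# and one linear solve per step, no Newton iteration
def row_step(f, dfdu, u, dt):
    k = f(u) / (1.0 - dt * dfdu(u))   # scalar stand-in for the linear solve
    return u + dt * k

f = lambda v: -50.0 * v   # stiff model problem u' = -50 u
J = lambda v: -50.0       # its (constant) Jacobian
u, dt = 1.0, 0.1          # dt is 2.5x the explicit Euler limit 2/50
for _ in range(100):
    u = row_step(f, J, u, dt)
```

Explicit Euler at this step size would diverge; the linearly implicit step contracts the solution by a factor of 6 per step while evaluating the Jacobian only once, which is the cost advantage the abstract highlights over ESDIRK3's Newton iterations.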
A numerical scheme to solve unstable boundary value problems
NASA Technical Reports Server (NTRS)
Kalnay-Rivas, E.
1977-01-01
The considered scheme makes it possible to determine an unstable steady state solution in cases in which, because of lack of symmetry, such a solution cannot be obtained analytically, and other time integration or relaxation schemes, because of instability, fail to converge. The iterative solution of a single complex equation is discussed and a nonlinear system of equations is considered. Described applications of the scheme are related to a steady state solution with shear instability, an unstable nonlinear Ekman boundary layer, and the steady state solution of a baroclinic atmosphere with asymmetric forcing. The scheme makes use of forward and backward time integrations of the original spatial differential operators and of an approximation of the adjoint operators. Only two computations of the time derivative per iteration are required.
NASA Astrophysics Data System (ADS)
Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars
2018-02-01
The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. 
The selection of the integration scheme and the appropriate time step should take into account the typical altitude range as well as the total length of the simulation to achieve the most efficient computations. In summary, for the specific ECMWF high-resolution data set considered in this study, we recommend the third-order Runge-Kutta method with a time step of 170 s, or the midpoint scheme with a time step of 100 s, for efficient simulations of up to 10 days of simulation time. Purely stratospheric simulations can use significantly larger time steps of 800 s for the midpoint scheme and 1100 s for the third-order Runge-Kutta method.
NASA Technical Reports Server (NTRS)
Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The efficiency gains obtained using higher-order implicit Runge-Kutta schemes, as compared with the second-order accurate backward difference schemes for the unsteady Navier-Stokes equations, are investigated. Three different algorithms for solving the nonlinear system of equations arising at each time step are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a nonlinear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on inexact Newton methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques, and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step, while the other (PGMRES) uses the Generalized Minimal Residual method. Results demonstrating the relative superiority of these Newton-based schemes are presented. Efficiency gains as high as a factor of 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.
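The nonlinear system arising at each implicit time step, and its solution by Newton's method, can be sketched on a scalar backward Euler step; the paper uses higher-order implicit Runge-Kutta stages and Krylov linear solvers, so the scalar problem, tolerances, and the direct "linear solve" below are illustrative assumptions.

```python
def be_step(f, dfdu, u_n, dt, tol=1e-12, max_iter=50):
    # backward Euler step u_new = u_n + dt * f(u_new),
    # solved with Newton's method (scalar stand-in for a Newton-Krylov solver)
    u = u_n                            # initial guess: previous state
    for _ in range(max_iter):
        r = u - u_n - dt * f(u)        # nonlinear residual
        if abs(r) < tol:
            break
        u -= r / (1.0 - dt * dfdu(u))  # Newton update (scalar linear solve)
    return u

# u' = -u^2, u(0) = 1, exact solution u(t) = 1/(1 + t)
u = 1.0
for _ in range(10):
    u = be_step(lambda v: -v * v, lambda v: -2.0 * v, u, 0.1)
```

In the PDE setting the scalar division becomes a preconditioned Krylov solve (Richardson in LMG, GMRES in PGMRES), and making that inner solve only approximately accurate is what "inexact Newton" refers to.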
Analysis of 3D poroelastodynamics using BEM based on modified time-step scheme
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Petrov, A. N.; Vorobtsov, I. V.
2017-10-01
This paper presents the development of 3D boundary element modeling of dynamic partially saturated poroelastic media using a stepping scheme. The Boundary Element Method (BEM) in the Laplace domain and a time-stepping scheme for numerical inversion of the Laplace transform are used to solve the boundary value problem. A modified stepping scheme with a varied integration step was applied to the calculation of the quadrature coefficients, exploiting the symmetry of the integrand and integral formulas for strongly oscillating functions. The problem of a force acting on the end of a poroelastic prismatic console was solved using the developed method. A comparison of the results obtained by the traditional stepping scheme with those obtained by the modified scheme shows that the combined formulas improve computational efficiency.
Efficient adaptive pseudo-symplectic numerical integration techniques for Landau-Lifshitz dynamics
NASA Astrophysics Data System (ADS)
d'Aquino, M.; Capuano, F.; Coppola, G.; Serpico, C.; Mayergoyz, I. D.
2018-05-01
Numerical time integration schemes for Landau-Lifshitz magnetization dynamics are considered. Such dynamics preserves the magnetization amplitude and, in the absence of dissipation, also implies the conservation of the free energy. This property is generally lost when time discretization is performed for the numerical solution. In this work, explicit numerical schemes based on Runge-Kutta methods are introduced. The schemes are termed pseudo-symplectic in that they are accurate to order p, but preserve magnetization amplitude and free energy to order q > p. An effective strategy for adaptive time-stepping control is discussed for schemes of this class. Numerical tests against analytical solutions for the simulation of fast precessional dynamics are performed in order to point out the effectiveness of the proposed methods.
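To illustrate the amplitude-preservation property at stake, here is a minimal sketch (not the authors' pseudo-symplectic schemes) of a standard explicit RK4 step for the conservative Landau-Lifshitz precession equation dm/dt = -m x h, followed by an explicit renormalization of |m|; the pseudo-symplectic schemes of the paper achieve amplitude and energy conservation to high order without such a projection:

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ll_rhs(m, h):
    # conservative Landau-Lifshitz precession term: dm/dt = -m x h
    return tuple(-c for c in cross(m, h))

def rk4_step_projected(m, h, dt):
    """One RK4 step for dm/dt = -m x h, then renormalize |m| to 1."""
    k1 = ll_rhs(m, h)
    k2 = ll_rhs(tuple(mi + 0.5 * dt * ki for mi, ki in zip(m, k1)), h)
    k3 = ll_rhs(tuple(mi + 0.5 * dt * ki for mi, ki in zip(m, k2)), h)
    k4 = ll_rhs(tuple(mi + dt * ki for mi, ki in zip(m, k3)), h)
    m_new = tuple(mi + dt * (a + 2 * b + 2 * c + d) / 6.0
                  for mi, a, b, c, d in zip(m, k1, k2, k3, k4))
    norm = math.sqrt(sum(c * c for c in m_new))
    return tuple(c / norm for c in m_new)
```

For a constant effective field the exact dynamics is a pure precession, so both |m| and the energy -m·h should be conserved, which makes this an easy check for any candidate integrator.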
Sivak, David A; Chodera, John D; Crooks, Gavin E
2014-06-19
When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
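A representative member of the splitting family under discussion is the BAOAB scheme (the splitting singled out by the paper is closely related to velocity Verlet; BAOAB is used here only as a familiar illustration, with all parameter names our own):

```python
import math
import random

def baoab_step(x, v, force, mass, gamma, kT, dt, rng=random):
    """One BAOAB step for Langevin dynamics.

    B = half momentum kick, A = half position drift,
    O = exact Ornstein-Uhlenbeck update of the velocity.
    """
    v = v + 0.5 * dt * force(x) / mass        # B: half kick
    x = x + 0.5 * dt * v                      # A: half drift
    c1 = math.exp(-gamma * dt)                # O: exact OU solution
    c2 = math.sqrt((1.0 - c1 * c1) * kT / mass)
    v = c1 * v + c2 * rng.gauss(0.0, 1.0)
    x = x + 0.5 * dt * v                      # A: half drift
    v = v + 0.5 * dt * force(x) / mass        # B: half kick
    return x, v
```

In the zero-friction limit the O step becomes the identity and the splitting reduces to deterministic velocity Verlet, which is one way such schemes connect back to the well-agreed deterministic algorithms mentioned above.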
A Semi-Implicit, Three-Dimensional Model for Estuarine Circulation
Smith, Peter E.
2006-01-01
A semi-implicit, finite-difference method for the numerical solution of the three-dimensional equations for circulation in estuaries is presented and tested. The method uses a three-time-level, leapfrog-trapezoidal scheme that is essentially second-order accurate in the spatial and temporal numerical approximations. The three-time-level scheme is shown to be preferred over a two-time-level scheme, especially for problems with strong nonlinearities. The stability of the semi-implicit scheme is free from any time-step limitation related to the terms describing vertical diffusion and the propagation of the surface gravity waves. The scheme does not rely on any form of vertical/horizontal mode-splitting to treat the vertical diffusion implicitly. At each time step, the numerical method uses a double-sweep method to transform a large number of small tridiagonal equation systems and then uses the preconditioned conjugate-gradient method to solve a single, large, five-diagonal equation system for the water surface elevation. The governing equations for the multi-level scheme are prepared in a conservative form by integrating them over the height of each horizontal layer. The layer-integrated volumetric transports replace velocities as the dependent variables so that the depth-integrated continuity equation that is used in the solution for the water surface elevation is linear. Volumetric transports are computed explicitly from the momentum equations. The resulting method is mass conservative, efficient, and numerically accurate.
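The double-sweep method mentioned above is, for a single tridiagonal system, the classic Thomas algorithm; the following is a generic sketch, not the model's implementation, and assumes diagonal dominance as the vertical-diffusion systems of semi-implicit schemes typically provide:

```python
def thomas_solve(a, b, c, d):
    """Double-sweep (Thomas) algorithm for a tridiagonal system.

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Returns the solution as a list.
    """
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                # forward sweep: eliminate sub-diagonal
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):       # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each of the many small vertical systems is solved in O(n) this way, leaving only the single large five-diagonal surface-elevation system for the preconditioned conjugate-gradient solver.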
Enabling an Integrated Rate-temporal Learning Scheme on Memristor
NASA Astrophysics Data System (ADS)
He, Wei; Huang, Kejie; Ning, Ning; Ramanathan, Kiruthika; Li, Guoqi; Jiang, Yu; Sze, Jiayin; Shi, Luping; Zhao, Rong; Pei, Jing
2014-04-01
The learning scheme is the key to the utilization of spike-based computation and the emulation of neural/synaptic behaviors toward the realization of cognition. Biological observations reveal an integrated spike time- and spike rate-dependent plasticity as a function of presynaptic firing frequency. However, this integrated rate-temporal learning scheme has not been realized on any nanodevice. In this paper, such a scheme is successfully demonstrated on a memristor. Great robustness against spiking-rate fluctuation is achieved by waveform engineering, aided by the good analog properties exhibited by the iron oxide-based memristor. Spike-time-dependent plasticity (STDP) occurs at moderate presynaptic firing frequencies, and spike-rate-dependent plasticity (SRDP) dominates in the other regions. This demonstration provides a novel approach to the implementation of neural coding, which facilitates the development of bio-inspired computing systems.
Implicit and semi-implicit schemes in the Versatile Advection Code: numerical tests
NASA Astrophysics Data System (ADS)
Toth, G.; Keppens, R.; Botchev, M. A.
1998-04-01
We describe and evaluate various implicit and semi-implicit time integration schemes applied to the numerical simulation of hydrodynamical and magnetohydrodynamical problems. The schemes were implemented recently in the software package Versatile Advection Code, which uses modern shock capturing methods to solve systems of conservation laws with optional source terms. The main advantage of implicit solution strategies over explicit time integration is that the restrictive constraint on the allowed time step can be (partially) eliminated, thus the computational cost is reduced. The test problems cover one and two dimensional, steady state and time accurate computations, and the solutions contain discontinuities. For each test, we confront explicit with implicit solution strategies.
A parallel time integrator for noisy nonlinear oscillatory systems
NASA Astrophysics Data System (ADS)
Subber, Waad; Sarkar, Abhijit
2018-06-01
In this paper, we adapt a parallel time integration scheme to track the trajectories of noisy nonlinear dynamical systems. Specifically, we formulate a parallel algorithm to generate the sample path of a nonlinear oscillator defined by stochastic differential equations (SDEs) using the so-called parareal method for ordinary differential equations (ODEs). The presence of the Wiener process in SDEs causes difficulties in the direct application of any numerical integration technique for ODEs, including the parareal algorithm. The parallel implementation of the algorithm involves two SDE solvers, namely a fine-level scheme to integrate the system in parallel and a coarse-level scheme to generate and correct the required initial conditions to start the fine-level integrators. For the numerical illustration, a randomly excited Duffing oscillator is investigated in order to study the performance of the stochastic parallel algorithm with respect to a range of system parameters. The distributed implementation of the algorithm exploits the Message Passing Interface (MPI).
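The parareal idea referred to above can be sketched for a deterministic ODE (the paper's contribution is its extension to SDE sample paths); here forward Euler serves as both the coarse propagator (one step per window) and the fine propagator (many substeps per window):

```python
def parareal(f, y0, t0, t1, n_coarse, n_fine, n_iter):
    """Parareal iteration for dy/dt = f(t, y), a deterministic sketch."""
    dt = (t1 - t0) / n_coarse

    def coarse(t, y):                    # one forward-Euler step per window
        return y + dt * f(t, y)

    def fine(t, y):                      # n_fine forward-Euler substeps
        h = dt / n_fine
        for k in range(n_fine):
            y = y + h * f(t + k * h, y)
        return y

    # initial serial coarse sweep
    y = [y0]
    for j in range(n_coarse):
        y.append(coarse(t0 + j * dt, y[j]))

    for _ in range(n_iter):
        # the fine solves are independent and would run in parallel
        fine_vals = [fine(t0 + j * dt, y[j]) for j in range(n_coarse)]
        y_new = [y0]
        for j in range(n_coarse):
            # parareal correction: new coarse + (fine - old coarse)
            y_new.append(coarse(t0 + j * dt, y_new[j])
                         + fine_vals[j] - coarse(t0 + j * dt, y[j]))
        y = y_new
    return y
```

After k iterations the first k windows are exact with respect to the fine solver, so the method converges in at most `n_coarse` iterations; the speedup comes from stopping far earlier.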
Implicit time accurate simulation of unsteady flow
NASA Astrophysics Data System (ADS)
van Buuren, René; Kuerten, Hans; Geurts, Bernard J.
2001-03-01
Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution was determined for comparison with the implicit second-order Crank-Nicolson scheme. The time step in the explicit scheme is restricted by both temporal accuracy and stability requirements, whereas in the A-stable implicit scheme the time step has to obey temporal resolution requirements and numerical convergence conditions. The nonlinear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted, and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time-accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems may occur that are closely related to a highly complex structure of the basins of attraction of the iterative method.
Stability of mixed time integration schemes for transient thermal analysis
NASA Technical Reports Server (NTRS)
Liu, W. K.; Lin, J. I.
1982-01-01
A current research topic in coupled-field problems is the development of effective transient algorithms that permit different time integration methods with different time steps to be used simultaneously in various regions of the problems. The implicit-explicit approach seems to be very successful in structural, fluid, and fluid-structure problems. This paper summarizes this research direction. A family of mixed time integration schemes, with the capabilities mentioned above, is also introduced for transient thermal analysis. A stability analysis and the computer implementation of this technique are also presented. In particular, it is shown that the mixed time implicit-explicit methods provide a natural framework for the further development of efficient, clean, modularized computer codes.
NASA Technical Reports Server (NTRS)
Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.
2002-01-01
The rapid increase in available computational power over the last decade has enabled higher resolution flow simulations and more widespread use of unstructured grid methods for complex geometries. While much of this effort has been focused on steady-state calculations in the aerodynamics community, the need to accurately predict off-design conditions, which may involve substantial amounts of flow separation, points to the need to efficiently simulate unsteady flow fields. Accurate unsteady flow simulations can easily require several orders of magnitude more computational effort than a corresponding steady-state simulation. For this reason, techniques for improving the efficiency of unsteady flow simulations are required in order to make such calculations feasible in the foreseeable future. The purpose of this work is to investigate possible reductions in computer time due to the choice of an efficient time-integration scheme from a series of schemes differing in the order of time-accuracy, and by the use of more efficient techniques to solve the nonlinear equations which arise while using implicit time-integration schemes. This investigation is carried out in the context of a two-dimensional unstructured mesh laminar Navier-Stokes solver.
A new heterogeneous asynchronous explicit-implicit time integrator for nonsmooth dynamics
NASA Astrophysics Data System (ADS)
Fekak, Fatima-Ezzahra; Brun, Michael; Gravouil, Anthony; Depale, Bruno
2017-07-01
In computational structural dynamics, particularly in the presence of nonsmooth behavior, the choice of the time step and the time integrator has a critical impact on the feasibility of the simulation. Furthermore, in some cases, as for a bridge crane under seismic loading, multiple time scales coexist in the same problem, and the use of multi-time-scale methods is suitable. Here, we propose a new explicit-implicit heterogeneous asynchronous time integrator (HATI) for nonsmooth transient dynamics with frictionless unilateral contacts and impacts. Furthermore, we present a new explicit time integrator for contact/impact problems where the contact constraints are enforced using a Lagrange multiplier method. In other words, the aim of this paper is to use an explicit time integrator with a fine time scale in the contact area to reproduce high-frequency phenomena, while an implicit time integrator is adopted in the other parts to reproduce the lower-frequency phenomena and to optimize the CPU time. In a first step, the explicit time integrator is tested on a one-dimensional example and compared to Moreau-Jean's event-capturing schemes. The explicit algorithm is found to be very accurate; the scheme generally has a higher order of convergence than Moreau-Jean's schemes and also exhibits excellent energy behavior. Then, the two-time-scale explicit-implicit HATI is applied to the numerical example of a bridge crane under seismic loading. The results are validated against a fine-scale fully explicit computation. The energy dissipated at the implicit-explicit interface is well controlled, and the computational time is lower than that of a fully explicit simulation.
Efficient method of evaluation for Gaussian Hartree-Fock exchange operator for Gau-PBE functional
NASA Astrophysics Data System (ADS)
Song, Jong-Won; Hirao, Kimihiko
2015-07-01
We previously developed an efficient screened hybrid functional called Gaussian-Perdew-Burke-Ernzerhof (Gau-PBE) [Song et al., J. Chem. Phys. 135, 071103 (2011)] for large molecules and extended systems, which is characterized by the usage of a Gaussian function as a modified Coulomb potential for the Hartree-Fock (HF) exchange. We found that the adoption of a Gaussian HF exchange operator considerably decreases the calculation time cost of periodic systems while improving the reproducibility of the bandgaps of semiconductors. We present a distance-based screening scheme here that is tailored for the Gaussian HF exchange integral that utilizes multipole expansion for the Gaussian two-electron integrals. We found a new multipole screening scheme helps to save the time cost for the HF exchange integration by efficiently decreasing the number of integrals of, specifically, the near field region without incurring substantial changes in total energy. In our assessment on the periodic systems of seven semiconductors, the Gau-PBE hybrid functional with a new screening scheme has 1.56 times the time cost of a pure functional while the previous Gau-PBE was 1.84 times and HSE06 was 3.34 times.
Sixth- and eighth-order Hermite integrator for N-body simulations
NASA Astrophysics Data System (ADS)
Nitadori, Keigo; Makino, Junichiro
2008-10-01
We present sixth- and eighth-order Hermite integrators for astrophysical N-body simulations, which use the derivatives of accelerations up to second order (snap) and third order (crackle). These schemes do not require previous values for the corrector, and require only one previous value to construct the predictor. Thus, they are fairly easy to implement. The additional cost of the calculation of the higher-order derivatives is not very high. Even for the eighth-order scheme, the number of floating-point operations for the force calculation is only about two times larger than that for the traditional fourth-order Hermite scheme. The sixth-order scheme is better than the traditional fourth-order scheme in most cases. When the required accuracy is very high, the eighth-order one is the best. These high-order schemes have several practical advantages. For example, they allow a larger number of particles to be integrated in parallel than the fourth-order scheme does, resulting in higher execution efficiency on both general-purpose parallel computers and GRAPE systems.
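For reference, the traditional fourth-order Hermite predictor-corrector step that these schemes extend looks roughly as follows (scalar position and velocity for clarity; real N-body codes use per-particle vectors, and `accel_jerk` stands in for the force loop):

```python
def hermite4_step(x, v, accel_jerk, dt):
    """One step of the standard fourth-order Hermite scheme.

    `accel_jerk(x, v)` returns the acceleration and its time
    derivative (jerk) at the given phase-space point.
    """
    a0, j0 = accel_jerk(x, v)
    # predictor: Taylor expansion using acceleration and jerk
    xp = x + v * dt + a0 * dt**2 / 2 + j0 * dt**3 / 6
    vp = v + a0 * dt + j0 * dt**2 / 2
    a1, j1 = accel_jerk(xp, vp)
    # corrector: Hermite interpolation of the acceleration over the step
    v1 = v + (a0 + a1) * dt / 2 + (j0 - j1) * dt**2 / 12
    x1 = x + (v + v1) * dt / 2 + (a0 - a1) * dt**2 / 12
    return x1, v1
```

The sixth- and eighth-order variants follow the same predict-evaluate-correct pattern, with snap and crackle entering the Taylor series and interpolation.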
Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E
2018-03-14
We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.
Higher-order time integration of Coulomb collisions in a plasma using Langevin equations
Dimits, A. M.; Cohen, B. I.; Caflisch, R. E.; ...
2013-02-08
The extension of Langevin-equation Monte-Carlo algorithms for Coulomb collisions from the conventional Euler-Maruyama time integration to the next higher order of accuracy, the Milstein scheme, has been developed, implemented, and tested. This extension proceeds via a formulation of the angular scattering directly as stochastic differential equations in the two fixed-frame spherical-coordinate velocity variables. Results from the numerical implementation show the expected improvement [O(Δt) vs. O(Δt^(1/2))] in the strong convergence rate both for the speed |v| and angular components of the scattering. An important result is that this improved convergence is achieved for the angular component of the scattering if and only if the “area-integral” terms in the Milstein scheme are included. The resulting Milstein scheme is of value as a step towards algorithms with both improved accuracy and efficiency. These include both algorithms with improved convergence in the averages (weak convergence) and multi-time-level schemes. The latter have been shown to give a greatly reduced cost for a given overall error level when compared with conventional Monte-Carlo schemes, and their performance is improved considerably when the Milstein algorithm is used for the underlying time advance versus the Euler-Maruyama algorithm. A new method for sampling the area integrals is given which is a simplification of an earlier direct method and which retains high accuracy. Lastly, this method, while being useful in its own right because of its relative simplicity, is also expected to considerably reduce the computational requirements for the direct conditional sampling of the area integrals that is needed for adaptive strong integration.
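The difference between the Euler-Maruyama and Milstein updates can be sketched for a scalar SDE dX = a(X) dt + b(X) dW, where the area-integral terms discussed above do not arise (they appear only in the multidimensional case the paper treats):

```python
def euler_maruyama_step(x, a, b, dt, dW):
    """Euler-Maruyama step for dX = a(X) dt + b(X) dW (strong order 1/2)."""
    return x + a(x) * dt + b(x) * dW

def milstein_step(x, a, b, db, dt, dW):
    """Milstein step: the extra 0.5*b*b'*(dW^2 - dt) term raises the
    strong convergence order to 1; `db` is the derivative b'(x)."""
    return (x + a(x) * dt + b(x) * dW
            + 0.5 * b(x) * db(x) * (dW * dW - dt))
```

For additive noise (b constant, b' = 0) the two schemes coincide, which is why the correction matters precisely for the state-dependent diffusion coefficients of Coulomb scattering.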
Bolis, A; Cantwell, C D; Kirby, R M; Sherwin, S J
2014-01-01
We investigate the relative performance of a second-order Adams–Bashforth scheme and second-order and fourth-order Runge–Kutta schemes when time stepping a 2D linear advection problem discretised using a spectral/hp element technique for a range of different mesh sizes and polynomial orders. Numerical experiments explore the effects of short (two wavelengths) and long (32 wavelengths) time integration for sets of uniform and non-uniform meshes. The choice of time-integration scheme and discretisation together fixes a CFL limit that imposes a restriction on the maximum time step, which can be taken to ensure numerical stability. The number of steps, together with the order of the scheme, affects not only the runtime but also the accuracy of the solution. Through numerical experiments, we systematically highlight the relative effects of spatial resolution and choice of time integration on performance and provide general guidelines on how best to achieve the minimal execution time in order to obtain a prescribed solution accuracy. The significant role played by higher polynomial orders in reducing CPU time while preserving accuracy becomes more evident, especially for uniform meshes, compared with what has been typically considered when studying this type of problem. © 2014 The Authors. International Journal for Numerical Methods in Fluids, published by John Wiley & Sons, Ltd. PMID:25892840
Chiang, Kai-Wei; Chang, Hsiu-Wen; Li, Chia-Yuan; Huang, Yun-Wen
2009-01-01
Digital mobile mapping, which integrates digital imaging with direct geo-referencing, has developed rapidly over the past fifteen years. Direct geo-referencing is the determination of the time-variable position and orientation parameters for a mobile digital imager. The most common technologies used for this purpose today are satellite positioning using Global Positioning System (GPS) and Inertial Navigation System (INS) using an Inertial Measurement Unit (IMU). They are usually integrated in such a way that the GPS receiver is the main position sensor, while the IMU is the main orientation sensor. The Kalman Filter (KF) is considered as the optimal estimation tool for real-time INS/GPS integrated kinematic position and orientation determination. An intelligent hybrid scheme consisting of an Artificial Neural Network (ANN) and KF has been proposed to overcome the limitations of KF and to improve the performance of the INS/GPS integrated system in previous studies. However, the accuracy requirements of general mobile mapping applications can’t be achieved easily, even by the use of the ANN-KF scheme. Therefore, this study proposes an intelligent position and orientation determination scheme that embeds ANN with conventional Rauch-Tung-Striebel (RTS) smoother to improve the overall accuracy of a MEMS INS/GPS integrated system in post-mission mode. By combining the Micro Electro Mechanical Systems (MEMS) INS/GPS integrated system and the intelligent ANN-RTS smoother scheme proposed in this study, a cheaper but still reasonably accurate position and orientation determination scheme can be anticipated. PMID:22574034
Time as a Tool for Policy Analysis in Aging.
ERIC Educational Resources Information Center
Pastorello, Thomas
National policy makers have put forth different life cycle planning proposals for the more satisfying integration of education, work and leisure over the life course. This speech describes a decision making scheme, the Time Paradigm, for researched-based choice among various proposals. The scheme is defined in terms of a typology of time-related…
NASA Technical Reports Server (NTRS)
Bates, J. R.; Semazzi, F. H. M.; Higgins, R. W.; Barros, Saulo R. M.
1990-01-01
A vector semi-Lagrangian semi-implicit two-time-level finite-difference integration scheme for the shallow water equations on the sphere is presented. A C-grid is used for the spatial differencing. The trajectory-centered discretization of the momentum equation in vector form eliminates pole problems and, at comparable cost, gives greater accuracy than a previous semi-Lagrangian finite-difference scheme which used a rotated spherical coordinate system. In terms of the insensitivity of the results to increasing timestep, the new scheme is as successful as recent spectral semi-Lagrangian schemes. In addition, the use of a multigrid method for solving the elliptic equation for the geopotential allows efficient integration with an operation count which, at high resolution, is of lower order than in the case of the spectral models. The properties of the new scheme should allow finite-difference models to compete with spectral models more effectively than has previously been possible.
NASA Astrophysics Data System (ADS)
Yan, Y.; Barth, A.; Beckers, J. M.; Brankart, J. M.; Brasseur, P.; Candille, G.
2017-07-01
In this paper, three incremental analysis update schemes (IAU 0, IAU 50 and IAU 100) are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The difference between the three IAU schemes lies in the position of the increment update window. The relevance of each IAU scheme is evaluated through analyses of both thermohaline and dynamical variables. The validation of the assimilation results is performed according to both deterministic and probabilistic metrics against different sources of observations. For deterministic validation, the ensemble mean and the ensemble spread are compared to the observations. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score. The obtained results show that (1) the IAU 50 scheme has the same performance as the IAU 100 scheme; (2) the IAU 50/100 schemes outperform the IAU 0 scheme in error covariance propagation for thermohaline variables in relatively stable regions, while the IAU 0 scheme outperforms the IAU 50/100 schemes in the estimation of dynamical variables in dynamically active regions; and (3) with a sufficient number of observations and good error specification, the impact of the IAU schemes is negligible. The differences between the IAU 0 scheme and the IAU 50/100 schemes are mainly due to different model integration times and different instabilities (density inversion, large vertical velocity, etc.) induced by the increment update. The longer model integration time with the IAU 50/100 schemes, especially the free model integration, on one hand allows for better re-establishment of the equilibrium model state, and on the other hand smooths the strong gradients in dynamically active regions.
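The basic IAU idea, spreading an analysis increment over a window of the model integration rather than applying it at once, can be sketched with a toy scalar model (an illustration only, not the ensemble ocean system used in the paper; the `window` convention here is a hypothetical simplification of the IAU 0/50/100 placements):

```python
def forecast_iau(x0, model_step, increment, n_steps, window):
    """Integrate a toy model while applying an analysis increment.

    `window` = (start, end) gives the range of steps over which the
    increment is distributed; (0, 1) applies it in one shot (IAU 0-like),
    (0, n_steps) spreads it over the whole cycle (IAU 100-like).
    """
    start, end = window
    n_inc = max(end - start, 1)
    x = x0
    for k in range(n_steps):
        x = model_step(x)
        if start <= k < end:
            x = x + increment / n_inc   # a small slice of the increment
    return x
```

Whatever the window placement, the same total increment is injected; what changes is how much free model integration surrounds the update, which is exactly the trade-off the comparison above quantifies.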
Zhang, Zhijun; Li, Zhijun; Zhang, Yunong; Luo, Yamei; Li, Yuanqing
2015-12-01
We propose a dual-arm cyclic-motion-generation (DACMG) scheme by a neural-dynamic method, which can remedy the joint-angle-drift phenomenon of a humanoid robot. In particular, according to a neural-dynamic design method, first, a cyclic-motion performance index is exploited and applied. This cyclic-motion performance index is then integrated into a quadratic programming (QP)-type scheme with time-varying constraints, called the time-varying-constrained DACMG (TVC-DACMG) scheme. The scheme includes the kinematic motion equations of two arms and the time-varying joint limits. The scheme can not only generate the cyclic motion of two arms for a humanoid robot but also control the arms to move to the desired position. In addition, the scheme considers the physical limit avoidance. To solve the QP problem, a recurrent neural network is presented and used to obtain the optimal solutions. Computer simulations and physical experiments demonstrate the effectiveness and the accuracy of such a TVC-DACMG scheme and the neural network solver.
Integrated guidance and control for microsatellite real-time automated proximity operations
NASA Astrophysics Data System (ADS)
Chen, Ying; He, Zhen; Zhou, Ding; Yu, Zhenhua; Li, Shunli
2018-07-01
This paper investigates the trajectory planning and control of autonomous spacecraft proximity operations with impulsive dynamics. A new integrated guidance and control scheme is developed to perform automated close-range rendezvous for underactuated microsatellites. To efficiently prevent collisions, a modified RRT* trajectory planning algorithm is proposed in this context. Several engineering constraints, such as collision avoidance, plume impingement, field of view and control feasibility, are considered simultaneously. Then, a feedback controller that employs a turn-burn-turn strategy, combining impulsive orbital control with finite-time attitude control, is designed to ensure the implementation of the planned trajectory. Finally, the performance of the trajectory planner and the controller is evaluated through numerical tests. Simulation results indicate the real-time implementability of the proposed integrated guidance and control scheme, with position control errors of less than 0.5 m and velocity control errors of less than 0.05 m/s. Consequently, the proposed scheme offers the potential for wide application, such as on-orbit maintenance, space surveillance and debris removal.
Geometric integration in Born-Oppenheimer molecular dynamics.
Odell, Anders; Delin, Anna; Johansson, Börje; Cawkwell, Marc J; Niklasson, Anders M N
2011-12-14
Geometric integration schemes for extended Lagrangian self-consistent Born-Oppenheimer molecular dynamics, including a weak dissipation to remove numerical noise, are developed and analyzed. The extended Lagrangian framework enables the geometric integration of both the nuclear and electronic degrees of freedom. This provides highly efficient simulations that are stable and energy conserving even under incomplete and approximate self-consistent field (SCF) convergence. We investigate three different geometric integration schemes: (1) regular time-reversible Verlet, (2) second-order optimal symplectic, and (3) third-order optimal symplectic. We examine energy conservation, accuracy, and stability as a function of dissipation, integration time step, and SCF convergence. We find that the inclusion of dissipation in the symplectic integration methods gives an efficient damping of numerical noise or perturbations that would otherwise accumulate from finite-precision arithmetic in a perfectly reversible dynamics. © 2011 American Institute of Physics.
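As a minimal illustration of why time-reversible (symplectic) integration matters for long simulations, the velocity Verlet sketch below integrates a harmonic oscillator, a hypothetical stand-in for one nuclear degree of freedom, and shows that the energy error stays bounded with no secular drift. It is not the authors' extended-Lagrangian scheme:

```python
def verlet_oscillator(x0, v0, omega, dt, n_steps):
    """Velocity Verlet for x'' = -omega^2 * x; returns the energy history."""
    x, v = x0, v0
    a = -omega**2 * x
    energies = []
    for _ in range(n_steps):
        x += v * dt + 0.5 * a * dt**2      # position update
        a_new = -omega**2 * x              # force at the new position
        v += 0.5 * (a + a_new) * dt        # velocity update with averaged force
        a = a_new
        energies.append(0.5 * v**2 + 0.5 * omega**2 * x**2)
    return energies

E = verlet_oscillator(1.0, 0.0, omega=1.0, dt=0.05, n_steps=2000)
drift = max(E) - min(E)  # bounded oscillation around the exact energy 0.5
```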
Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows, I: Basic Theory
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, H. C.
2003-01-01
The objective of this paper is to extend our recently developed highly parallelizable nonlinear stable high order schemes for complex multiscale hydrodynamic applications to the viscous MHD equations. These schemes employ multiresolution wavelets as adaptive numerical dissipation controls to limit the amount of dissipation and to aid the selection and/or blending of the appropriate types of dissipation to be used. The new scheme is formulated for both the conservative and non-conservative forms of the MHD equations in curvilinear grids. The four advantages of the present approach over existing MHD schemes reported in the open literature are as follows. First, the scheme is constructed for long-time integrations of shock/turbulence/combustion MHD flows. Available schemes are too diffusive for long-time integrations and/or turbulence/combustion problems. Second, unlike existing schemes for the conservative MHD equations, which suffer from ill-conditioned eigen-decompositions, the present scheme makes use of a well-conditioned eigen-decomposition obtained from a minor modification of the eigenvectors of the non-conservative MHD equations to solve the conservative form of the MHD equations. Third, this approach of using the non-conservative eigensystem when solving the conservative equations also works well in the context of standard shock-capturing schemes for the MHD equations. Fourth, a new approach to minimize the numerical error of the divergence-free magnetic condition for high order schemes is introduced. Numerical experiments with typical MHD model problems revealed the applicability of the newly developed schemes for the MHD equations.
Increasing sensitivity of pulse EPR experiments using echo train detection schemes.
Mentink-Vigier, F; Collauto, A; Feintuch, A; Kaminker, I; Tarle, V; Goldfarb, D
2013-11-01
Modern pulse EPR experiments are routinely used to study the structural features of paramagnetic centers. They are usually performed at low temperatures, where relaxation times are long and polarization is high, to achieve a sufficient signal-to-noise ratio (SNR). However, when working with samples whose amount and/or concentration are limited, sensitivity becomes an issue and measurements may therefore require a significant accumulation time, up to 12 h or more. As the detection scheme of practically all pulse EPR sequences is based on the integration of a spin echo--either primary, stimulated or refocused--a considerable increase in SNR can be obtained by replacing the single-echo detection scheme with a train of echoes. All these echoes, generated by Carr-Purcell type sequences, are integrated and summed together to improve the SNR. This scheme is commonly used in NMR, and here we demonstrate its applicability to a number of frequently used pulse EPR experiments: echo-detected EPR, Davies and Mims ENDOR (Electron-Nuclear Double Resonance), DEER (Electron-Electron Double Resonance) and EDNMR (Electron-Electron Double Resonance (ELDOR)-Detected NMR), which were combined with a Carr-Purcell-Meiboom-Gill (CPMG) type detection scheme at W-band. By collecting the transient signal and integrating a number of refocused echoes, this detection scheme yielded a 1.6- to 5-fold SNR improvement, depending on the paramagnetic center and the pulse sequence applied. This improvement is achieved while keeping the experimental time constant, and it does not introduce signal distortion. Copyright © 2013 Elsevier Inc. All rights reserved.
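The SNR argument can be illustrated numerically: summing N idealized echoes of equal amplitude with independent noise improves the SNR by roughly sqrt(N), ignoring T2 decay across the train. All numbers below are illustrative assumptions, not measured EPR data:

```python
import numpy as np

rng = np.random.default_rng(0)

def snr(samples):
    # empirical signal-to-noise ratio of a set of repeated measurements
    return samples.mean() / samples.std()

signal, sigma, n_echoes, n_trials = 1.0, 1.0, 16, 20000
single = signal + sigma * rng.standard_normal(n_trials)
train = signal + sigma * rng.standard_normal((n_trials, n_echoes))
snr_single = snr(single)
snr_train = snr(train.sum(axis=1))  # integrate and sum the echo train
```

With 16 ideal echoes the expected gain is sqrt(16) = 4; the 1.6- to 5-fold gains reported above differ because real echo trains decay with T2 and depend on the pulse sequence.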
A progress report on estuary modeling by the finite-element method
Gray, William G.
1978-01-01
Various schemes are investigated for finite-element modeling of two-dimensional surface-water flows. The first schemes investigated combine finite-element spatial discretization with split-step time stepping schemes that have been found useful in finite-difference computations. Because of the large number of numerical integrations performed in space and the large sparse matrices solved, these finite-element schemes were found to be economically uncompetitive with finite-difference schemes. A very promising leapfrog scheme is proposed which, when combined with a novel very fast spatial integration procedure, eliminates the need to solve any matrices at all. Additional problems attacked included proper propagation of waves and proper specification of the normal flow-boundary condition. This report indicates work in progress and does not come to a definitive conclusion as to the best approach for finite-element modeling of surface-water problems. The results presented represent findings obtained between September 1973 and July 1976. (Woodard-USGS)
Multi-symplectic integrators: numerical schemes for Hamiltonian PDEs that conserve symplecticity
NASA Astrophysics Data System (ADS)
Bridges, Thomas J.; Reich, Sebastian
2001-06-01
The symplectic numerical integration of finite-dimensional Hamiltonian systems is a well established subject and has led to a deeper understanding of existing methods as well as to the development of new very efficient and accurate schemes, e.g., for rigid body, constrained, and molecular dynamics. The numerical integration of infinite-dimensional Hamiltonian systems or Hamiltonian PDEs is much less explored. In this Letter, we suggest a new theoretical framework for generalizing symplectic numerical integrators for ODEs to Hamiltonian PDEs in R2: time plus one space dimension. The central idea is that symplecticity for Hamiltonian PDEs is directional: the symplectic structure of the PDE is decomposed into distinct components representing space and time independently. In this setting PDE integrators can be constructed by concatenating uni-directional ODE symplectic integrators. This suggests a natural definition of multi-symplectic integrator as a discretization that conserves a discrete version of the conservation of symplecticity for Hamiltonian PDEs. We show that this approach leads to a general framework for geometric numerical schemes for Hamiltonian PDEs, which have remarkable energy and momentum conservation properties. Generalizations, including development of higher-order methods, application to the Euler equations in fluid mechanics, application to perturbed systems, and extension to more than one space dimension are also discussed.
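The distinction drawn here between symplectic and non-symplectic integration is easy to demonstrate on a finite-dimensional example: for the harmonic oscillator H = (p^2 + q^2)/2, explicit Euler steadily gains energy while the symplectic Euler variant keeps it bounded. A minimal ODE sketch, not a multi-symplectic PDE integrator:

```python
def explicit_euler(q, p, dt, n):
    for _ in range(n):
        q, p = q + dt * p, p - dt * q  # both updates use the old values
    return q, p

def symplectic_euler(q, p, dt, n):
    for _ in range(n):
        p = p - dt * q  # kick using the old position
        q = q + dt * p  # drift using the *new* momentum
    return q, p

def energy(q, p):
    return 0.5 * (q**2 + p**2)

qe, pe = explicit_euler(1.0, 0.0, 0.01, 10000)
qs, ps = symplectic_euler(1.0, 0.0, 0.01, 10000)
```

After 10^4 steps the explicit scheme has visibly inflated the energy, while the symplectic scheme stays close to the exact value H = 0.5.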
A splitting integration scheme for the SPH simulation of concentrated particle suspensions
NASA Astrophysics Data System (ADS)
Bian, Xin; Ellero, Marco
2014-01-01
Simulating nearly contacting solid particles in suspension is a challenging task due to the diverging behavior of short-range lubrication forces, which pose a serious time-step limitation for explicit integration schemes. This general difficulty limits severely the total duration of simulations of concentrated suspensions. Inspired by the ideas developed in [S. Litvinov, M. Ellero, X.Y. Hu, N.A. Adams, J. Comput. Phys. 229 (2010) 5457-5464] for the simulation of highly dissipative fluids, we propose in this work a splitting integration scheme for the direct simulation of solid particles suspended in a Newtonian liquid. The scheme separates the contributions of different forces acting on the solid particles. In particular, intermediate- and long-range multi-body hydrodynamic forces, which are computed from the discretization of the Navier-Stokes equations using the smoothed particle hydrodynamics (SPH) method, are taken into account using an explicit integration; for short-range lubrication forces, velocities of pairwise interacting solid particles are updated implicitly by sweeping over all the neighboring pairs iteratively, until convergence in the solution is obtained. By using the splitting integration, simulations can be run stably and efficiently up to very large solid particle concentrations. Moreover, the proposed scheme is not limited to the SPH method presented here, but can be easily applied to other simulation techniques employed for particulate suspensions.
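The core of such a splitting scheme, an implicit Gauss-Seidel-like sweep over pairwise stiff interactions, can be sketched for a toy pairwise drag force; the force model and the fixed sweep count below are simplified assumptions, not the SPH lubrication model of the paper:

```python
def pair_sweep(v, pairs, gamma, dt, n_sweeps=100):
    """Implicitly relax stiff pairwise drag dv_i/dt = -gamma*(v_i - v_j) by
    sweeping over interacting pairs: each pair is solved exactly (backward
    Euler on the relative velocity), conserving the pair's momentum."""
    v = list(v)
    for _ in range(n_sweeps):
        for i, j in pairs:
            rel = (v[i] - v[j]) / (1.0 + 2.0 * gamma * dt)  # implicit damping
            mean = 0.5 * (v[i] + v[j])                       # conserved
            v[i], v[j] = mean + 0.5 * rel, mean - 0.5 * rel
    return v

# gamma*dt = 1 would cripple an explicit update; the implicit sweep is stable
v = pair_sweep([1.0, -1.0, 0.5], [(0, 1), (1, 2)], gamma=100.0, dt=0.01)
```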
Ocean Variability Effects on Underwater Acoustic Communications
2011-09-01
schemes for accessing wide frequency bands. Compared with OFDM schemes, the multiband MIMO transmission combined with time reversal processing... systems, or multiple-input/multiple-output (MIMO) systems, decision feedback equalization and interference cancellation schemes have been integrated... MIMO receiver also iterates channel estimation and symbol demodulation with
Variational methods for direct/inverse problems of atmospheric dynamics and chemistry
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena
2013-04-01
We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. It is important that accurate matching of the numerical schemes be provided across the chain of objects: direct/adjoint problems - sensitivity relations - inverse problems, including assimilation of all available measurement data. To solve these problems we have developed a new enhanced set of cost-effective algorithms. The matched description of the multi-scale processes is provided by a specific choice of the variational principle functionals for the whole set of integrated models. All functionals of the variational principle are then approximated in space and time by splitting and decomposition methods. This approach allows us to consider separately, for example, the space-time problems of atmospheric chemistry within decomposition schemes for the integral identity sum analogs of the variational principle at each time step and in each 3D finite volume. To enhance efficiency, the set of chemical reactions is divided into subsets related to the operators of production and destruction. The idea of Euler's integrating factors is then applied within the local adjoint problem technique [1]-[3]. The analytical solutions of such adjoint problems play the role of integrating factors for the differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed into an equivalent system of integral equations. As a result we avoid the construction and inversion of preconditioning operators containing the Jacobian matrices which arise in traditional implicit schemes for ODE solution. This is the main advantage of our schemes. At the same time step, but at different stages of the "global" splitting scheme, the system of atmospheric dynamic equations is solved.
For the convection-diffusion equations for all state functions in the integrated models we have developed monotone and stable discrete-analytical numerical schemes [1]-[3] conserving the positivity of the chemical substance concentrations and possessing the energy- and mass-balance properties postulated in the general variational principle for integrated models. All algorithms for the solution of transport, diffusion and transformation problems are direct (without iterations). The work is partially supported by Program No. 4 of the Presidium of RAS and Program No. 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187 and by Integration projects No. 8 and 35 of SD RAS. Our studies are in line with the goals of COST Action ES1004. References: [1] Penenko V., Tsvetova E. Discrete-analytical methods for the implementation of variational principles in environmental applications // Journal of Computational and Applied Mathematics, 2009, v. 226, pp. 319-330. [2] Penenko A.V. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods // Numerical Analysis and Applications, 2012, v. 5, pp. 326-341. [3] Penenko V., Tsvetova E. Variational methods for constructing the monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013 (in press).
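The Euler integrating-factor idea used for the production-destruction subsets can be sketched for a single species with constant production P and destruction rate D: the update is exact for constant coefficients and preserves positivity for any time step, with no Jacobian to build or invert. A hypothetical minimal example, not the paper's full chemistry solver:

```python
import math

def integrating_factor_step(c, production, destruction, dt):
    """Exact one-step solution of dc/dt = P - D*c via the integrating factor
    exp(D*t); a positive concentration c stays positive for any dt."""
    decay = math.exp(-destruction * dt)
    return c * decay + (production / destruction) * (1.0 - decay)

c = 1.0
for _ in range(100):
    c = integrating_factor_step(c, production=2.0, destruction=4.0, dt=0.5)
```

The iteration relaxes to the steady state P/D = 0.5 regardless of the step size, which is what makes the scheme attractive for stiff chemistry.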
Wolff, Sebastian; Bucher, Christian
2013-01-01
This article presents asynchronous collision integrators and a simple asynchronous method for treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces that is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response, in which velocities are modified to avoid penetrations. Although decomposition contact response solves a large system of linear equations (critical for the numerical efficiency of explicit time stepping schemes) and needs special treatment of overconstraint and linear dependency of the contact constraints (for example, from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robustly. Only a single constraint involving a very small number of degrees of freedom is considered at once, leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. The contact of nodes that are subject to restraints needs special care. Together with the aforementioned projection for restraints, a novel efficient solution scheme is presented. The collision integrator does not influence the critical time step; hence, the time step can be chosen independently of the underlying time-stepping scheme and may be fixed or time-adaptive. New demands on global collision detection are discussed, exemplified by position codes and node-to-segment integration. Numerical examples illustrate the convergence and efficiency of the new contact algorithm. Copyright © 2013 The Authors.
International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd. PMID:23970806
Vernier-like super resolution with guided correlated photon pairs.
Nespoli, Matteo; Goan, Hsi-Sheng; Shih, Min-Hsiung
2016-01-11
We describe a dispersion-enabled, ultra-low power realization of super-resolution in an integrated Mach-Zehnder interferometer. Our scheme is based on a Vernier-like effect in the coincident detection of frequency correlated, non-degenerate photon pairs at the sensor output in the presence of group index dispersion. We design and simulate a realistic integrated refractive index sensor in a silicon nitride on silica platform and characterize its performance in the proposed scheme. We present numerical results showing a sensitivity improvement upward of 40 times over a traditional sensing scheme. The device we design is well within the reach of modern semiconductor fabrication technology. We believe this is the first metrology scheme that uses waveguide group index dispersion as a resource to attain super-resolution.
Seakeeping with the semi-Lagrangian particle finite element method
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth; Servan-Camas, Borja; Becker, Pablo Agustín; Garcia-Espinosa, Julio
2017-07-01
The application of the semi-Lagrangian particle finite element method (SL-PFEM) for the seakeeping simulation of the wave adaptive modular vehicle under spray generating conditions is presented. The time integration of the Lagrangian advection is done using the explicit integration of the velocity and acceleration along the streamlines (X-IVAS). Despite the suitability of the SL-PFEM for the considered seakeeping application, small time steps were needed in the X-IVAS scheme to control the solution accuracy. A preliminary proposal to overcome this limitation of the X-IVAS scheme for seakeeping simulations is presented.
Application of Krylov exponential propagation to fluid dynamics equations
NASA Technical Reports Server (NTRS)
Saad, Youcef; Semeraro, David
1991-01-01
An application of matrix exponentiation via Krylov subspace projection to the solution of fluid dynamics problems is presented. The main idea is to approximate the operation exp(A)v by means of a projection-like process onto a Krylov subspace. This results in the computation of an exponential matrix-vector product similar to the one above but of much smaller size. Time integration schemes can then be devised to exploit this basic computational kernel. The motivation of this approach is to provide time-integration schemes that are essentially explicit in nature but have good stability properties.
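A minimal Arnoldi-based sketch of this kernel is shown below: exp(dt*A)v is approximated by projecting onto an m-dimensional Krylov subspace and exponentiating the small Hessenberg matrix. This follows the standard construction rather than the authors' exact implementation:

```python
import numpy as np
from scipy.linalg import expm

def krylov_expv(A, v, dt, m=20):
    """Approximate exp(dt*A) @ v by Arnoldi projection onto an m-dimensional
    Krylov subspace; only the small (m x m) Hessenberg matrix is exponentiated."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    k = m
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # happy breakdown: subspace is invariant
            k = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k)
    e1[0] = 1.0
    return beta * V[:, :k] @ (expm(dt * H[:k, :k]) @ e1)
```

For m equal to the full dimension the projection is exact up to roundoff; in practice m is kept much smaller than the problem size, which is the point of the approach.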
A computationally efficient scheme for the non-linear diffusion equation
NASA Astrophysics Data System (ADS)
Termonia, P.; Van de Vyver, H.
2009-04-01
This Letter proposes a new numerical scheme for integrating the non-linear diffusion equation. It is shown that the scheme is linearly stable. Tests are presented comparing this scheme to a popular decentered version of the linearized Crank-Nicolson scheme, showing that, although the new scheme is slightly less accurate in treating the highly resolved waves, it (i) better treats highly non-linear systems, (ii) better handles the short waves, (iii) for a given test bed turns out to be three to four times cheaper computationally, and (iv) is easier to implement.
NASA Astrophysics Data System (ADS)
Nigro, A.; De Bartolo, C.; Crivellini, A.; Bassi, F.
2017-12-01
In this paper we investigate the possibility of using the high-order accurate A(α)-stable Second Derivative (SD) schemes proposed by Enright for the implicit time integration of the Discontinuous Galerkin (DG) space-discretized Navier-Stokes equations. These multistep schemes are A-stable up to fourth order, but their use results in a system matrix that is difficult to compute. Furthermore, the evaluation of the nonlinear function is computationally very demanding. We propose here a Matrix-Free (MF) implementation of the Enright schemes that yields a method without the costs of forming, storing and factorizing the system matrix, that is much less computationally expensive than its matrix-explicit counterpart, and that performs competitively with other implicit schemes, such as the Modified Extended Backward Differentiation Formulae (MEBDF). The algorithm makes use of the preconditioned GMRES algorithm for solving the linear system of equations. The preconditioner is based on the ILU(0) factorization of an approximate but computationally cheaper form of the system matrix, and it is reused over several time steps to improve the efficiency of the MF Newton-Krylov solver. We additionally employ a polynomial extrapolation technique to compute an accurate initial guess for the implicit nonlinear system. The stability properties of the SD schemes have been analyzed by solving a linear model problem. For the analysis of the Navier-Stokes equations, two-dimensional inviscid and viscous test cases, both with a known analytical solution, are solved to assess the accuracy of the proposed time integration method for nonlinear autonomous and non-autonomous systems, respectively. The performance of the SD algorithm is compared with that of an MF-MEBDF solver, in order to evaluate its effectiveness, identify its limitations and suggest possible further improvements.
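The matrix-free ingredient can be sketched independently of the DG discretization: the Jacobian-vector product needed by GMRES is replaced by a finite difference of the residual, so the system matrix is never formed or stored. The residual function below is a hypothetical stand-in, not the Navier-Stokes residual:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def mf_newton_step(F, u, eps=1e-7):
    """One matrix-free Newton step: J(u) @ v is approximated by a finite
    difference of the residual F, so only residual evaluations are needed."""
    Fu = F(u)
    n = len(u)

    def jv(v):
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return np.zeros(n)
        h = eps / nv
        return (F(u + h * v) - Fu) / h   # directional derivative of F at u

    J = LinearOperator((n, n), matvec=jv)
    du, info = gmres(J, -Fu)             # Krylov solve without an explicit matrix
    return u + du

# Hypothetical nonlinear residual: F(u) = u + u^3 - b
b = np.array([1.0, 2.0, 3.0])
F = lambda u: u + u**3 - b
u = np.zeros(3)
for _ in range(20):
    u = mf_newton_step(F, u)
```

In the paper's setting the same idea is combined with an ILU(0)-preconditioned GMRES; the sketch omits preconditioning for brevity.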
NASA Technical Reports Server (NTRS)
Chulya, Abhisak; Walker, Kevin P.
1991-01-01
A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small and large scale analysis. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.
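The flavor of such an implicit, iterative integrator can be conveyed with a scalar backward Euler step solved by Newton iteration; stability at time increments far beyond the explicit limit is visible on a stiff linear test problem. This is a generic sketch, not the HYPELA algorithm itself:

```python
def backward_euler(f, dfdy, y, dt, n_steps, tol=1e-12, max_iter=50):
    """Backward Euler with Newton iteration for y' = f(y): implicit, iterative,
    and stable for stiff problems even at large time increments."""
    for _ in range(n_steps):
        z = y  # Newton initial guess for y_{n+1}
        for _ in range(max_iter):
            residual = z - y - dt * f(z)
            slope = 1.0 - dt * dfdy(z)
            z_next = z - residual / slope
            if abs(z_next - z) < tol:
                z = z_next
                break
            z = z_next
        y = z
    return y

# Stiff linear test y' = -1000*y with dt = 0.1 (forward Euler requires dt < 0.002)
y = backward_euler(lambda y: -1000.0 * y, lambda y: -1000.0, 1.0, dt=0.1, n_steps=10)
```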
A prototype of mammography CADx scheme integrated to imaging quality evaluation techniques
NASA Astrophysics Data System (ADS)
Schiabel, Homero; Matheus, Bruno R. N.; Angelo, Michele F.; Patrocínio, Ana Claudia; Ventura, Liliane
2011-03-01
As all women over the age of 40 are recommended to undergo mammographic exams every two years, the demands on radiologists to evaluate mammographic images in short periods of time have increased considerably. As tools to improve quality and accelerate analysis, CADe/Dx (computer-aided detection/diagnosis) schemes have been investigated, but very few complete CADe/Dx schemes have been developed, and most are restricted to detection rather than diagnosis. The existing ones are usually tied to specific mammographic equipment (usually DR), which makes them very expensive. This paper therefore describes a prototype of a complete mammography CADx scheme, developed by our research group, integrated with an imaging quality evaluation process. The basic structure consists of pre-processing modules based on image acquisition and digitization procedures (FFDM, CR or film + scanner), a segmentation tool to detect clustered microcalcifications and suspect masses, and a classification scheme, which evaluates both the presence of microcalcification clusters and possible malignant masses based on their contour. The aim is to provide enough information not only on the detected structures but also a pre-report with a BI-RADS classification. At this time the system still lacks an interface integrating all the modules. Despite this, it is functional as a prototype for clinical practice testing, with results comparable to others reported in the literature.
Three-dimensional simulation of vortex breakdown
NASA Technical Reports Server (NTRS)
Kuruvila, G.; Salas, M. D.
1990-01-01
The integral form of the complete, unsteady, compressible, three-dimensional Navier-Stokes equations in conservation form, cast in a generalized coordinate system, is solved numerically to simulate the vortex breakdown phenomenon. The inviscid fluxes are discretized using Roe's upwind-biased flux-difference splitting scheme and the viscous fluxes are discretized using central differencing. Time integration is performed using a backward Euler ADI (alternating direction implicit) scheme. A full approximation multigrid is used to accelerate the convergence to steady state.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mundt, Michael; Kuemmel, Stephan
2006-08-15
The integral equation for the time-dependent optimized effective potential (TDOEP) in time-dependent density-functional theory is transformed into a set of partial differential equations. These equations involve only occupied Kohn-Sham orbitals and orbital shifts resulting from the difference between the exchange-correlation potential and the orbital-dependent potential. Given the success of an analogous scheme in the static case, a scheme that propagates orbitals and orbital shifts in real time is a natural candidate for an exact solution of the TDOEP equation. We investigate the numerical stability of such a scheme. An approximation beyond the Krieger-Li-Iafrate approximation for the time-dependent exchange-correlation potential is analyzed.
NASA Astrophysics Data System (ADS)
Xie, Qing; Xiao, Zhixiang; Ren, Zhuyin
2018-09-01
A spectral radius scaling semi-implicit time stepping scheme has been developed for simulating unsteady compressible reactive flows with detailed chemistry, in which the spectral radius in the LU-SGS scheme has been augmented to account for viscous/diffusive and reactive terms and a scalar matrix is proposed to approximate the chemical Jacobian using the minimum species destruction timescale. The performance of the semi-implicit scheme, together with a third-order explicit Runge-Kutta scheme and a Strang splitting scheme, has been investigated in auto-ignition and in laminar premixed and nonpremixed flames of three representative fuels, namely hydrogen, methane, and n-heptane. Results show that the minimum species destruction time scale can well represent the smallest chemical time scale in reactive flows and that the proposed scheme can significantly increase the allowable time steps in simulations. The scheme is stable when the time step is as large as 10 μs, which is about three to five orders of magnitude larger than the smallest time scales in the various tests considered. For the test flames considered, the semi-implicit scheme achieves second-order accuracy in time. Moreover, the errors in quantities of interest are smaller than those from the Strang splitting scheme, indicating the accuracy gain when the reaction and transport terms are solved coupled. Results also show that the relative efficiency of the different schemes depends on the fuel mechanism and test flame. When the minimum time scale in a reactive flow is governed by transport processes rather than chemical reactions, the proposed semi-implicit scheme is more efficient than the splitting scheme. Otherwise, the relative efficiency depends on the cost of the sub-iterations for convergence within each time step and of the integration of the chemistry substep. The capability of the compressible reacting flow solver and the proposed semi-implicit scheme is then demonstrated by capturing hydrogen detonation waves.
Finally, the performance of the proposed method is demonstrated in a two-dimensional hydrogen/air diffusion flame.
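For reference, the Strang splitting baseline used in the comparison alternates a half step of one suboperator with a full step of the other. In the scalar linear sketch below the two suboperators commute, so the split solution is exact; the rates are illustrative assumptions, not a real chemistry mechanism:

```python
import math

def strang_step(y, dt, transport_rate, chem_rate):
    """One Strang step for dy/dt = -(transport_rate + chem_rate) * y:
    half step of chemistry, full step of transport, half step of chemistry,
    each solved exactly. Second-order accurate in general; exact here
    because the two scalar suboperators commute."""
    y *= math.exp(-0.5 * chem_rate * dt)   # chemistry half step
    y *= math.exp(-transport_rate * dt)    # transport full step
    y *= math.exp(-0.5 * chem_rate * dt)   # chemistry half step
    return y

y = 1.0
for _ in range(10):
    y = strang_step(y, 0.1, transport_rate=1.0, chem_rate=50.0)
```

The accuracy comparison in the abstract concerns the noncommuting multidimensional case, where the splitting error does not vanish and solving reaction and transport coupled pays off.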
Harvesting model uncertainty for the simulation of interannual variability
NASA Astrophysics Data System (ADS)
Misra, Vasubandhu
2009-08-01
An innovative modeling strategy is introduced to account for uncertainty in the convective parameterization (CP) scheme of a coupled ocean-atmosphere model. The methodology involves calling the CP scheme several times at every given time step of the model integration to pick the most probable convective state. Each call of the CP scheme is unique in that one of its critical parameter values (which is unobserved but required by the scheme) is chosen randomly over a given range. This methodology is tested with the relaxed Arakawa-Schubert CP scheme in the Center for Ocean-Land-Atmosphere Studies (COLA) coupled general circulation model (CGCM). Relative to the control COLA CGCM, this methodology shows improvement in the El Niño-Southern Oscillation simulation and the Indian summer monsoon precipitation variability.
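The sampling strategy can be sketched generically: the parameterization is called several times per time step, each time with a randomly drawn value of the unobserved critical parameter, and one candidate state is selected. The stand-in tendency function and the median-based selection rule below are hypothetical simplifications of the paper's "most probable state" choice:

```python
import random

random.seed(42)

def cp_tendency(state, critical_param):
    # Hypothetical stand-in for the convective parameterization's tendency
    return -critical_param * state

def stochastic_cp(state, n_calls=10, param_range=(0.1, 0.5)):
    """Call the CP scheme n_calls times with a randomly drawn critical
    parameter, then select the median candidate tendency."""
    candidates = sorted(cp_tendency(state, random.uniform(*param_range))
                        for _ in range(n_calls))
    return candidates[len(candidates) // 2]

tendency = stochastic_cp(1.0)
```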
On modeling of integrated communication and control systems
NASA Technical Reports Server (NTRS)
Liou, Luen-Woei; Ray, Asok
1990-01-01
The mathematical modeling scheme proposed by Ray and Halevi (1988) for integrated communication and control systems is considered analytically, with an emphasis on the effect of introducing varying and distributed time delays to account for asynchronous time-division multiplexing in the communication part of the system. Ray and Halevi applied a state-transition concept to transform the original continuous-time model into a discrete-time model; the same approach was used by Kalman and Bertram (1959) to model various types of sampled data systems which are not subject to induced delays. The relationship between the two modeling schemes is explored, and it is shown that, although the Kalman-Bertram method has the advantage of a unified approach, it becomes inconvenient when varying delays appear in the control loop.
Rajagopalan, S. P.
2017-01-01
Certificateless-based signcryption overcomes inherent shortcomings of traditional Public Key Infrastructure (PKI) and the key escrow problem. It imparts efficient methods to design PKIs with public verifiability and ciphertext authenticity with minimum dependency. As a classic primitive in public key cryptography, signcryption performs validity checking of ciphertext without decryption by combining authentication, confidentiality, public verifiability and ciphertext authenticity much more efficiently than the traditional approach. In this paper, we first define a security model for certificateless-based signcryption, called the Complex Conjugate Differential Integrated Factor (CC-DIF) scheme, introducing complex conjugates through the security parameter and improving the secured message distribution rate. However, both the partial private key and the secret value change with respect to time. To overcome this weakness, a new certificateless-based signcryption scheme is proposed by setting the private key through a differential equation using an integration factor (DiffEIF), minimizing computational cost and communication overhead. The scheme is therefore proven secure (i.e., it improves the secured message distribution rate) for certificateless access control and signcryption-based schemes. In addition, compared with three other existing schemes, the CC-DIF scheme has the least computational cost and communication overhead for secured message communication in mobile networks. PMID:29040290
Chiang, Kai-Wei; Duong, Thanh Trung; Liao, Jhen-Kai
2013-01-01
The integration of an Inertial Navigation System (INS) and the Global Positioning System (GPS) is common in mobile mapping and navigation applications to seamlessly determine the position, velocity, and orientation of the mobile platform. In most INS/GPS integrated architectures, the GPS is considered to be an accurate reference with which to correct for the systematic errors of the inertial sensors, which are composed of biases, scale factors and drift. However, the GPS receiver may produce abnormal pseudo-range errors mainly caused by ionospheric delay, tropospheric delay and the multipath effect. These errors degrade the overall position accuracy of an integrated system that uses conventional INS/GPS integration strategies such as loosely coupled (LC) and tightly coupled (TC) schemes. Conventional tightly coupled INS/GPS integration schemes apply the Klobuchar model and the Hopfield model to reduce pseudo-range delays caused by ionospheric delay and tropospheric delay, respectively, but do not address the multipath problem. However, the multipath effect (from reflected GPS signals) affects the position error far more significantly in a consumer-grade GPS receiver than in an expensive, geodetic-grade GPS receiver. To avoid this problem, a new integrated INS/GPS architecture is proposed. The proposed method is described and applied in a real-time integrated system with two integration strategies, namely, loosely coupled and tightly coupled schemes, respectively. To verify the effectiveness of the proposed method, field tests with various scenarios are conducted and the results are compared with a reliable reference system. PMID:23955434
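The loosely coupled strategy described above can be illustrated with a toy one-dimensional Kalman filter in which noisy GPS position fixes correct an INS that dead-reckons with a biased accelerometer. This is a minimal sketch, not the authors' system; all noise levels and the bias value are hypothetical illustration choices.

```python
import numpy as np

# Toy 1-D loosely coupled INS/GPS fusion: the INS integrates a biased
# accelerometer; a Kalman filter with state [position, velocity, bias]
# uses noisy GPS position fixes to estimate and remove the drift.
rng = np.random.default_rng(1)
dt, n = 0.1, 600                      # 60 s at 10 Hz
bias_true, gps_sigma = 0.05, 2.0      # accel bias (m/s^2), GPS noise (m)

F = np.array([[1., dt, -0.5*dt*dt],   # the bias state is subtracted
              [0., 1., -dt],          # from the measured acceleration
              [0., 0., 1.]])
B = np.array([0.5*dt*dt, dt, 0.])
H = np.array([[1., 0., 0.]])          # GPS observes position only
Q = np.diag([1e-6, 1e-6, 1e-8])       # process noise (assumed)
R = np.array([[gps_sigma**2]])

x = np.zeros(3)                       # filter estimate [pos, vel, bias]
P = np.diag([1., 1., 0.1])
pos_true = 0.0                        # stationary platform
ins_pos, ins_vel = 0.0, 0.0           # uncorrected INS dead reckoning

for _ in range(n):
    a_meas = 0.0 + bias_true          # IMU output = true accel + bias
    ins_vel += a_meas*dt
    ins_pos += ins_vel*dt
    x = F @ x + B*a_meas              # predict
    P = F @ P @ F.T + Q
    z = pos_true + rng.normal(0.0, gps_sigma)   # GPS fix
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()           # correct
    P = (np.eye(3) - K @ H) @ P
```

After a minute of simulated data the filter's bias estimate converges toward the true value, while the uncorrected INS position has drifted by tens of metres.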
Should learners reason one step at a time? A randomised trial of two diagnostic scheme designs.
Blissett, Sarah; Morrison, Deric; McCarty, David; Sibbald, Matthew
2017-04-01
Making a diagnosis can be difficult for learners as they must integrate multiple clinical variables. Diagnostic schemes can help learners with this complex task. A diagnostic scheme is an algorithm that organises possible diagnoses by assigning signs or symptoms (e.g. systolic murmur) to groups of similar diagnoses (e.g. aortic stenosis and aortic sclerosis) and provides distinguishing features to help discriminate between similar diagnoses (e.g. carotid pulse). The current literature does not identify whether scheme layouts should guide learners to reason one step at a time in a terminally branching scheme or weigh multiple variables simultaneously in a hybrid scheme. We compared diagnostic accuracy, perceptual errors and cognitive load using two scheme layouts for cardiac auscultation. Focused on the task of identifying murmurs on Harvey, a cardiopulmonary simulator, 86 internal medicine residents used two scheme layouts. The terminally branching scheme organised the information into single variable decisions. The hybrid scheme combined single variable decisions with a chart integrating multiple distinguishing features. Using a crossover design, participants completed one set of murmurs (diastolic or systolic) with either the terminally branching or the hybrid scheme. The second set of murmurs was completed with the other scheme. A repeated measures MANOVA was performed to compare diagnostic accuracy, perceptual errors and cognitive load between the scheme layouts. There was a main effect of the scheme layout (Wilks' λ = 0.841, F(3,80) = 5.1, p = 0.003). Use of a terminally branching scheme was associated with increased diagnostic accuracy (65 versus 53%, p = 0.02), fewer perceptual errors (0.61 versus 0.98 errors, p = 0.001) and lower cognitive load (3.1 versus 3.5/7, p = 0.023). 
The terminally branching scheme was associated with improved diagnostic accuracy, fewer perceptual errors and lower cognitive load, suggesting that terminally branching schemes are effective for improving diagnostic accuracy. These findings can inform the design of schemes and other clinical decision aids. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
Optimal Runge-Kutta Schemes for High-order Spatial and Temporal Discretizations
2015-06-01
...using larger time steps versus lower-order time integration with smaller time steps. In the present work, an attempt is made to generalize these... generality and because of interest in multi-speed and high Reynolds number, wall-bounded flow regimes, a dual-time framework is adopted in the present work... errors of general combinations of high-order spatial and temporal discretizations. Different Runge-Kutta time integrators are applied to central...
Analysis of adaptive algorithms for an integrated communication network
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Barr, Matthew; Chong-Kwon, Kim
1985-01-01
Techniques were examined that trade communication bandwidth for decreased transmission delays. When the network is lightly used, these schemes attempt to use additional network resources to decrease communication delays. As the network utilization rises, the schemes degrade gracefully, still providing service but with minimal use of the network. Because the schemes use a combination of circuit and packet switching, they should respond to variations in the types and amounts of network traffic. Also, a combination of circuit and packet switching to support the widely varying traffic demands imposed on an integrated network was investigated. The packet switched component is best suited to bursty traffic where some delays in delivery are acceptable. The circuit switched component is reserved for traffic that must meet real time constraints. Selected packet routing algorithms that might be used in an integrated network were simulated. An integrated traffic places widely varying workload demands on a network. Adaptive algorithms were identified, ones that respond to both the transient and evolutionary changes that arise in integrated networks. A new algorithm was developed, hybrid weighted routing, that adapts to workload changes.
BossPro: a biometrics-based obfuscation scheme for software protection
NASA Astrophysics Data System (ADS)
Kuseler, Torben; Lami, Ihsan A.; Al-Assam, Hisham
2013-05-01
This paper proposes to integrate biometric-based key generation into an obfuscated interpretation algorithm to protect authentication application software from illegitimate use or reverse-engineering. This is especially necessary for mCommerce because application programmes on mobile devices, such as Smartphones and Tablet-PCs are typically open for misuse by hackers. Therefore, the scheme proposed in this paper ensures that a correct interpretation / execution of the obfuscated program code of the authentication application requires a valid biometrically generated key of the actual person to be authenticated, in real-time. Without this key, the real semantics of the program cannot be understood by an attacker even if he/she gains access to this application code. Furthermore, the security provided by this scheme can be a vital aspect in protecting any application running on mobile devices that are increasingly used to perform business/financial or other security related applications, but are easily lost or stolen. The scheme starts by creating a personalised copy of any application based on the biometric key generated during an enrolment process with the authenticator as well as a nonce created at the time of communication between the client and the authenticator. The obfuscated code is then shipped to the client's mobile device and integrated with real-time biometric extracted data of the client to form the unlocking key during execution. The novelty of this scheme is achieved by the close binding of this application program to the biometric key of the client, thus making this application unusable for others. Trials and experimental results on biometric key generation, based on clients' faces, and an implemented scheme prototype, based on the Android emulator, prove the concept and novelty of this proposed scheme.
NASA Astrophysics Data System (ADS)
Savin, Andrei V.; Smirnov, Petr G.
2018-05-01
Simulation of the collisional dynamics of a large ensemble of monodisperse particles by the discrete element method is considered. The Verlet scheme is used for integration of the equations of motion. The finite-difference scheme is found to be non-conservative to a degree that depends on the time step, which is equivalent to the appearance of a purely numerical energy source during collisions. A method to compensate for this source is proposed and tested.
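The time-step dependence of the energy error in a Verlet-type scheme can be seen even on a single harmonic oscillator, used here as a stand-in for the paper's collisional system: velocity Verlet conserves a shadow energy, and the deviation of the true energy scales as the square of the time step. A minimal sketch:

```python
def verlet_max_energy_error(dt, n_steps):
    """Max |E(t) - E(0)| for velocity Verlet on x'' = -x, x(0)=1, v(0)=0."""
    x, v = 1.0, 0.0
    e0 = 0.5*(v*v + x*x)
    err = 0.0
    for _ in range(n_steps):
        v_half = v + 0.5*dt*(-x)      # half kick
        x = x + dt*v_half             # drift
        v = v_half + 0.5*dt*(-x)      # half kick with the new force
        err = max(err, abs(0.5*(v*v + x*x) - e0))
    return err

e_coarse = verlet_max_energy_error(0.10, 1000)   # ~16 periods
e_fine = verlet_max_energy_error(0.05, 2000)     # same interval, dt halved
```

Halving the time step reduces the maximum energy deviation by roughly a factor of four, consistent with a second-order "numerical energy source" of the kind the abstract describes.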
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Jorgenson, Philip C. E.
2007-01-01
A time-accurate, upwind, finite volume method for computing compressible flows on unstructured grids is presented. The method is second order accurate in space and time and yields high resolution in the presence of discontinuities. For efficiency, the Roe approximate Riemann solver with an entropy correction is employed. In the basic Euler/Navier-Stokes scheme, many concepts of high order upwind schemes are adopted: the surface flux integrals are carefully treated, a Cauchy-Kowalewski time-stepping scheme is used in the time-marching stage, and a multidimensional limiter is applied in the reconstruction stage. However even with these up-to-date improvements, the basic upwind scheme is still plagued by the so-called "pathological behaviors," e.g., the carbuncle phenomenon, the expansion shock, etc. A solution to these limitations is presented which uses a very simple dissipation model while still preserving second order accuracy. This scheme is referred to as the enhanced time-accurate upwind (ETAU) scheme in this paper. The unstructured grid capability renders flexibility for use in complex geometry; and the present ETAU Euler/Navier-Stokes scheme is capable of handling a broad spectrum of flow regimes from high supersonic to subsonic at very low Mach number, appropriate for both CFD (computational fluid dynamics) and CAA (computational aeroacoustics). Numerous examples are included to demonstrate the robustness of the methods.
Exponential integrators in time-dependent density-functional calculations
NASA Astrophysics Data System (ADS)
Kidd, Daniel; Covington, Cody; Varga, Kálmán
2017-12-01
The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We determine an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For cases of dynamics driven by a time-dependent external potential, the accuracy of the exponential integrator methods is less enhanced but still matches or outperforms the best of the conventional methods tested.
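The idea behind integrating-factor and exponential time differencing methods, namely treating the stiff linear part of the equation exactly, can be sketched on a scalar stiff ODE rather than the Kohn-Sham equations; the test problem below is an illustration only.

```python
import numpy as np

# u' = -lam*u + sin(t): forward Euler is unstable for lam*dt > 2,
# while first-order exponential time differencing (ETD1) integrates
# the linear part exactly and remains stable at the same step size.
lam, dt, t_end = 50.0, 0.05, 5.0      # lam*dt = 2.5 > 2
n = int(round(t_end/dt))

u_fe, u_etd = 1.0, 1.0
E = np.exp(-lam*dt)
phi1 = (1.0 - E)/lam                  # dt * phi_1(-lam*dt)
for k in range(n):
    t = k*dt
    u_fe = u_fe + dt*(-lam*u_fe + np.sin(t))      # forward Euler
    u_etd = E*u_etd + phi1*np.sin(t)              # ETD1 step
```

The explicit step diverges, while the exponential integrator tracks the slowly varying forced solution.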
A robust and high-performance queue management controller for large round trip time networks
NASA Astrophysics Data System (ADS)
Khoshnevisan, Ladan; Salmasi, Farzad R.
2016-05-01
Congestion management for the transmission control protocol is of utmost importance to prevent packet loss within a network, which necessitates strategies for active queue management. The most widely applied active queue management strategies have inherent disadvantages that lead to suboptimal performance and even instability in the case of large round trip time and/or external disturbance. This paper presents an internal model control robust queue management scheme with two degrees of freedom in order to restrict the undesired effects of both large and small round trip times and of parameter variations on queue management. Conventional approaches such as proportional-integral and random early detection procedures lead to unstable behaviour due to large delay. Moreover, the internal model control-Smith scheme suffers from large oscillations due to large round trip times, while other schemes such as internal model control with proportional-integral-derivative control show excessively sluggish performance for small round trip time values. To overcome these shortcomings, we introduce a system entailing two individual controllers for queue management and disturbance rejection, simultaneously. Simulation results based on Matlab/Simulink and also Network Simulator 2 (NS2) demonstrate the effectiveness of the procedure and verify the analytical approach.
Liu, Chongxin; Liu, Hang
2017-01-01
This paper presents a continuous composite control scheme to achieve fixed-time stabilization for nonlinear systems with mismatched disturbances. The composite controller is constructed in two steps: first, uniformly finite-time exact disturbance observers are proposed to estimate and compensate for the disturbances; then, based on the adding-a-power-integrator technique and fixed-time stability theory, a continuous fixed-time stable state feedback controller and Lyapunov functions are constructed to achieve global fixed-time system stabilization. The proposed control method extends existing fixed-time stable control results to high-order nonlinear systems with mismatched disturbances and achieves global fixed-time stabilization. Besides, the proposed control scheme improves the disturbance rejection performance and achieves performance recovery of the nominal system. Simulation results are provided to show the effectiveness, the superiority and the applicability of the proposed control scheme. PMID:28406966
An Operator-Integration-Factor Splitting (OIFS) method for Incompressible Flows in Moving Domains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patel, Saumil S.; Fischer, Paul F.; Min, Misun
In this paper, we present a characteristic-based numerical procedure for simulating incompressible flows in domains with moving boundaries. Our approach utilizes an operator-integration-factor splitting technique to help produce an efficient and stable numerical scheme. Using the spectral element method and an arbitrary Lagrangian-Eulerian formulation, we investigate flows where the convective acceleration effects are non-negligible. Several examples, ranging from laminar to turbulent flows, are considered. Comparisons with a standard, semi-implicit time-stepping procedure illustrate the improved performance of the scheme.
NASA Astrophysics Data System (ADS)
Lu, Tiao; Cai, Wei
2008-10-01
In this paper, we propose a high order Fourier spectral-discontinuous Galerkin method for time-dependent Schrödinger-Poisson equations in 3-D spaces. The Fourier spectral Galerkin method is used for the two periodic transverse directions and a high order discontinuous Galerkin method for the longitudinal propagation direction. Such a combination results in a diagonal form for the differential operators along the transverse directions and a flexible method to handle the discontinuous potentials present in quantum heterojunction and superlattice structures. As the derivative matrices are required for various time integration schemes such as the exponential time differencing and Crank-Nicolson methods, explicit derivative matrices of the discontinuous Galerkin method of various orders are derived. Numerical results, using the proposed method with various time integration schemes, are provided to validate the method.
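As a side note on the time integrators mentioned, a Crank-Nicolson step for a discretized Schrödinger equation is exactly unitary whenever the discrete Hamiltonian is Hermitian, so the wave-function norm is preserved to roundoff. A small dense-matrix sketch for a free particle on a periodic grid (not the paper's spectral-DG operator):

```python
import numpy as np

# Crank-Nicolson for i psi_t = H psi with H = -d^2/dx^2 on a periodic grid.
N, L = 64, 2*np.pi
dx = L/N
x = np.arange(N)*dx
dt = 0.05

D2 = -2*np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
D2[0, -1] = D2[-1, 0] = 1.0           # periodic wrap-around
H = -D2/(dx*dx)                       # Hermitian (real symmetric)

A = np.eye(N) + 0.5j*dt*H             # (I + i dt H/2) psi_new = (I - i dt H/2) psi_old
Bm = np.eye(N) - 0.5j*dt*H
step = np.linalg.solve(A, Bm)         # Cayley transform: exactly unitary

psi = np.exp(1j*x)                    # plane-wave initial state
norm0 = np.sum(np.abs(psi)**2)*dx
for _ in range(200):
    psi = step @ psi
norm_end = np.sum(np.abs(psi)**2)*dx
```

After 200 steps the discrete norm is unchanged to roundoff, which is the property that makes Crank-Nicolson attractive for long Schrödinger propagations.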
Lee, Tian-Fu; Chang, I-Pin; Lin, Tsung-Hung; Wang, Ching-Cheng
2013-06-01
The integrated EPR information system supports convenient and rapid e-medicine services. A secure and efficient authentication scheme for the integrated EPR information system safeguards patients' electronic patient records (EPRs) and helps health care workers and medical personnel to rapidly make correct clinical decisions. Recently, Wu et al. proposed an efficient password-based user authentication scheme using smart cards for the integrated EPR information system, and claimed that the proposed scheme could resist various malicious attacks. However, their scheme is still vulnerable to lost smart card and stolen verifier attacks. This investigation discusses these weaknesses and proposes a secure and efficient authentication scheme for the integrated EPR information system as an alternative. Compared with related approaches, the proposed scheme not only retains a lower computational cost and does not require verifier tables for storing users' secrets, but also solves the security problems in previous schemes and withstands possible attacks.
NASA Astrophysics Data System (ADS)
Lafitte, Pauline; Melis, Ward; Samaey, Giovanni
2017-07-01
We present a general, high-order, fully explicit relaxation scheme which can be applied to any system of nonlinear hyperbolic conservation laws in multiple dimensions. The scheme consists of two steps. In a first (relaxation) step, the nonlinear hyperbolic conservation law is approximated by a kinetic equation with stiff BGK source term. Then, this kinetic equation is integrated in time using a projective integration method. After taking a few small (inner) steps with a simple, explicit method (such as direct forward Euler) to damp out the stiff components of the solution, the time derivative is estimated and used in an (outer) Runge-Kutta method of arbitrary order. We show that, with an appropriate choice of inner step size, the time step restriction on the outer time step is similar to the CFL condition for the hyperbolic conservation law. Moreover, the number of inner time steps is also independent of the stiffness of the BGK source term. We discuss stability and consistency, and illustrate with numerical results (linear advection, Burgers' equation and the shallow water and Euler equations) in one and two spatial dimensions.
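The inner/outer structure described above can be sketched on a two-variable stiff relaxation ODE, a stand-in for the kinetic BGK system: a few small forward-Euler steps damp the fast mode, then the slope of the last inner step is extrapolated over the remaining outer step. All parameter values below are illustrative only.

```python
import numpy as np

def f(y):
    slow, fast = y
    # slow variable evolves on an O(1) time scale; fast relaxes
    # toward it with stiff rate 100 (BGK-like relaxation)
    return np.array([-slow, -100.0*(fast - slow)])

def projective_euler_step(y, dt_outer, dt_inner=0.005, k_inner=5):
    for _ in range(k_inner):          # inner damping steps
        y_prev = y
        y = y + dt_inner*f(y)
    slope = (y - y_prev)/dt_inner     # estimate of the smooth derivative
    return y + (dt_outer - k_inner*dt_inner)*slope   # outer extrapolation

y = np.array([1.0, 0.0])
dt_outer = 0.1                        # 5x the forward-Euler limit 0.02
for _ in range(10):                   # integrate to t = 1
    y = projective_euler_step(y, dt_outer)
```

The slow component stays close to its exact value exp(-1) even though the outer step is five times the forward-Euler stability limit of the stiff term, and the fast component relaxes onto the slow one.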
2006-09-01
[Equation (3.7), garbled in extraction.] This system (3.6) is integrated in time using an explicit low-memory Runge-Kutta method: U^0 = U^n, U^i = U^0 - c_i Δt ... The signals are registered by the four-channel digital memory oscilloscopes Tektronix TDS 2414 and ASK 3107. Scheme of operation: The scheme of the experiment is
High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs
NASA Technical Reports Server (NTRS)
Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.
2014-01-01
This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) time-stepping Runge-Kutta (RK) schemes which increases numerical stability with respect to the time step size, resulting in decreased computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain a significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burgers' equation and the 2D advection equation, respectively. The method uses two different summation-by-parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in numerical stability 65 times greater than that of the fully explicit scheme is demonstrated to be achievable with the OP IMEX method applied to the 1D Burgers' equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. Also, the domain partitioning method in this work shows potential for breaking the computational domain into manageable sizes such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solving methods rather than the standard iterative methods currently used.
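The implicit-explicit idea, illustrated here with first-order Euler splitting rather than the paper's additive RK schemes, is easy to see on 1-D advection-diffusion: the non-stiff advection term is advanced explicitly while the stiff diffusion term is solved implicitly, so the time step is limited only by the advective CFL condition. A hedged sketch with illustrative parameters:

```python
import numpy as np

# u_t = -a u_x + nu u_xx on a periodic grid: explicit upwind advection,
# implicit (backward Euler) diffusion.
N, L = 100, 1.0
dx = L/N
a, nu = 1.0, 0.05
dt = 0.5*dx/a                         # advective CFL; ~5x the explicit
                                      # diffusion limit dx^2/(2*nu)
x = np.arange(N)*dx
u = np.exp(-100*(x - 0.5)**2)
mass0 = u.sum()*dx

D2 = -2*np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
D2[0, -1] = D2[-1, 0] = 1.0           # periodic wrap-around
D2 /= dx*dx
A_imp = np.eye(N) - dt*nu*D2          # implicit diffusion matrix

for _ in range(200):
    adv = -a*(u - np.roll(u, 1))/dx   # first-order upwind derivative
    u = np.linalg.solve(A_imp, u + dt*adv)
```

The run stays stable and conservative at a step well beyond the explicit diffusion limit; a fully explicit treatment of the diffusion term at this step size would diverge.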
Two-dimensional Euler and Navier-Stokes Time accurate simulations of fan rotor flows
NASA Technical Reports Server (NTRS)
Boretti, A. A.
1990-01-01
Two numerical methods are presented which describe the unsteady flow field in the blade-to-blade plane of an axial fan rotor. These methods solve the compressible, time-dependent, Euler and the compressible, turbulent, time-dependent, Navier-Stokes conservation equations for mass, momentum, and energy. The Navier-Stokes equations are written in Favre-averaged form and are closed with an approximate two-equation turbulence model with low Reynolds number and compressibility effects included. The unsteady aerodynamic component is obtained by superposing inflow or outflow unsteadiness on the steady conditions through time-dependent boundary conditions. The integration in space is performed by using a finite volume scheme, and the integration in time is performed by using k-stage Runge-Kutta schemes, k = 2, ..., 5. The numerical integration algorithm allows the reduction of the computational cost of an unsteady simulation involving high frequency disturbances in both CPU time and memory requirements. Less than 200 sec of CPU time are required to advance the Euler equations in a computational grid made up of about 2000 grid points during 10,000 time steps on a CRAY Y-MP computer, with a required memory of less than 0.3 megawords.
An unconditionally stable Runge-Kutta method for unsteady flows
NASA Technical Reports Server (NTRS)
Jorgenson, Philip C. E.; Chima, Rodrick V.
1988-01-01
A quasi-three dimensional analysis was developed for unsteady rotor-stator interaction in turbomachinery. The analysis solves the unsteady Euler or thin-layer Navier-Stokes equations in a body fitted coordinate system. It accounts for the effects of rotation, radius change, and stream surface thickness. The Baldwin-Lomax eddy viscosity model is used for turbulent flows. The equations are integrated in time using a four stage Runge-Kutta scheme with a constant time step. Implicit residual smoothing was employed to accelerate the solution of the time accurate computations. The scheme is described and accuracy analyses are given. Results are shown for a supersonic through-flow fan designed for NASA Lewis. The rotor:stator blade ratio was taken as 1:1. Results are also shown for the first stage of the Space Shuttle Main Engine high pressure fuel turbopump. Here the blade ratio is 2:3. Implicit residual smoothing was used to increase the time step limit of the unsmoothed scheme by a factor of six with negligible differences in the unsteady results. It is felt that the implicitly smoothed Runge-Kutta scheme is easily competitive with implicit schemes for unsteady flows while retaining the simplicity of an explicit scheme.
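Implicit residual smoothing, the ingredient credited above with the sixfold time-step increase, replaces each residual R by a smoothed R̄ satisfying (1 - ε δ²)R̄ = R, where δ² is the second-difference operator; this damps the high-frequency residual components that limit the explicit step. A small periodic sketch (the coefficient ε is illustrative; the paper applies this inside each Runge-Kutta stage):

```python
import numpy as np

# Smooth a residual field by solving (I - eps*delta^2) r_bar = r on a
# periodic 1-D grid.  The Fourier symbol is 1/(1 + 2*eps*(1 - cos(theta))),
# so the highest mode (theta = pi) is damped by a factor 1/(1 + 4*eps).
N, eps = 64, 2.0
M = (1 + 2*eps)*np.eye(N) - eps*(np.eye(N, k=1) + np.eye(N, k=-1))
M[0, -1] = M[-1, 0] = -eps            # periodic wrap-around

r = (-1.0)**np.arange(N)              # pure highest-frequency residual
r_bar = np.linalg.solve(M, r)
damping = np.max(np.abs(r_bar))/np.max(np.abs(r))
```

With ε = 2 the sawtooth residual is attenuated ninefold, which is the mechanism that permits a larger stable time step for the smoothed scheme.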
Exponential Methods for the Time Integration of Schroedinger Equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cano, B.; Gonzalez-Pachon, A.
2010-09-30
We consider exponential methods of second order in time for the integration of the cubic nonlinear Schroedinger equation. We are interested in exploiting the special structure of this equation, so we examine the symmetry, symplecticity and invariant-preservation properties of the proposed methods, which allow integration over long times with reasonable accuracy. Computational efficiency is also our aim; we therefore carry out numerical computations to compare the methods considered, and conclude that explicit Lawson schemes projected on the norm of the solution are an efficient tool for integrating this equation.
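A Lawson (exponential Euler) step for the cubic NLS, followed by projection onto the norm of the solution, can be sketched with a Fourier pseudospectral discretization. The plane wave u = e^{ix} is an exact stationary solution of i u_t + u_xx + |u|^2 u = 0 (since ω = k² - |A|² = 0), which gives a convenient check; this is an illustration, not the authors' code.

```python
import numpy as np

# Lawson-Euler for i u_t + u_xx + |u|^2 u = 0, i.e. u_t = L u + N(u)
# with L = i d^2/dx^2 (diagonal in Fourier) and N(u) = i|u|^2 u,
# projected onto the initial L2 norm after every step.
Npts, Lx = 64, 2*np.pi
dx = Lx/Npts
x = np.arange(Npts)*dx
dt, nsteps = 0.01, 100

k = 2*np.pi*np.fft.fftfreq(Npts, d=dx)   # integer wavenumbers here
expL = np.exp(-1j*k**2*dt)               # exact flow of the linear part

u = np.exp(1j*x)                         # exact stationary plane wave
u0 = u.copy()
norm0 = np.linalg.norm(u)
for _ in range(nsteps):
    u = np.fft.ifft(expL*np.fft.fft(u + dt*(1j*np.abs(u)**2*u)))
    u *= norm0/np.linalg.norm(u)         # projection on the norm
err = np.max(np.abs(u - u0))
```

The projection removes the spurious norm growth of the plain Lawson-Euler step, and the stationary solution is preserved to high accuracy over 100 steps.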
An integral equation formulation for rigid bodies in Stokes flow in three dimensions
NASA Astrophysics Data System (ADS)
Corona, Eduardo; Greengard, Leslie; Rachh, Manas; Veerapaneni, Shravan
2017-03-01
We present a new derivation of a boundary integral equation (BIE) for simulating the three-dimensional dynamics of arbitrarily-shaped rigid particles of genus zero immersed in a Stokes fluid, on which are prescribed forces and torques. Our method is based on a single-layer representation and leads to a simple second-kind integral equation. It avoids the use of auxiliary sources within each particle that play a role in some classical formulations. We use a spectrally accurate quadrature scheme to evaluate the corresponding layer potentials, so that only a small number of spatial discretization points per particle are required. The resulting discrete sums are computed in O(n) time, where n denotes the number of particles, using the fast multipole method (FMM). The particle positions and orientations are updated by a high-order time-stepping scheme. We illustrate the accuracy, conditioning and scaling of our solvers with several numerical examples.
A simple molecular mechanics integrator in mixed rigid body and dihedral angle space
Vitalis, Andreas; Pappu, Rohit V.
2014-01-01
We propose a numerical scheme to integrate equations of motion in a mixed space of rigid-body and dihedral angle coordinates. The focus of the presentation is biomolecular systems and the framework is applicable to polymers with tree-like topology. By approximating the effective mass matrix as diagonal and lumping all bias torques into the time dependencies of the diagonal elements, we take advantage of the formal decoupling of individual equations of motion. We impose energy conservation independently for every degree of freedom and this is used to derive a numerical integration scheme. The cost of all auxiliary operations is linear in the number of atoms. By coupling the scheme to one of two popular thermostats, we extend the method to sample constant temperature ensembles. We demonstrate that the integrator of choice yields satisfactory stability and is free of mass-metric tensor artifacts, which is expected by construction of the algorithm. Two fundamentally different systems, viz., liquid water and an α-helical peptide in a continuum solvent are used to establish the applicability of our method to a wide range of problems. The resultant constant temperature ensembles are shown to be thermodynamically accurate. The latter relies on detailed, quantitative comparisons to data from reference sampling schemes operating on exactly the same sets of degrees of freedom. PMID:25053299
Gómez Pueyo, Adrián; Marques, Miguel A L; Rubio, Angel; Castro, Alberto
2018-05-09
We examine various integration schemes for the time-dependent Kohn-Sham equations. Contrary to the time-dependent Schrödinger's equation, this set of equations is nonlinear, due to the dependence of the Hamiltonian on the electronic density. We discuss some of their exact properties, and in particular their symplectic structure. Four different families of propagators are considered, specifically the linear multistep, Runge-Kutta, exponential Runge-Kutta, and the commutator-free Magnus schemes. These have been chosen because they have been largely ignored in the past for time-dependent electronic structure calculations. The performance is analyzed in terms of cost-versus-accuracy. The clear winner, in terms of robustness, simplicity, and efficiency is a simplified version of a fourth-order commutator-free Magnus integrator. However, in some specific cases, other propagators, such as some implicit versions of the multistep methods, may be useful.
NASA Astrophysics Data System (ADS)
D'Alessandro, Valerio; Binci, Lorenzo; Montelpare, Sergio; Ricci, Renato
2018-01-01
Open-source CFD codes provide suitable environments for implementing and testing low-dissipative algorithms typically used to simulate turbulence. In this research work we developed CFD solvers for incompressible flows based on high-order explicit and diagonally implicit Runge-Kutta (RK) schemes for time integration. In particular, an iterated PISO-like procedure based on Rhie-Chow correction was used to handle pressure-velocity coupling within each implicit RK stage. For the explicit approach, a projected scheme was used to avoid the "checker-board" effect. The above-mentioned approaches were also extended to flow problems involving heat transfer. It is worth noting that the numerical technology available in the OpenFOAM library was used for space discretization. In this work, we additionally explore the reliability and effectiveness of the proposed implementations by computing several unsteady flow benchmarks; we also show that the numerical diffusion due to the time integration approach is completely canceled using the solution techniques proposed here.
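The diagonally implicit RK schemes mentioned above can be illustrated with the classical two-stage, stiffly accurate, L-stable SDIRK method with γ = 1 - √2/2 applied to the scalar test equation y' = λy: each stage requires only a (here scalar) linear solve, and the global error drops by roughly a factor of four when the step is halved, confirming second order. A generic sketch, unrelated to the OpenFOAM implementation:

```python
import math

def sdirk2(lam, y0, dt, nsteps):
    """Two-stage, stiffly accurate, L-stable SDIRK2 for y' = lam*y."""
    g = 1.0 - math.sqrt(2.0)/2.0      # diagonal coefficient gamma
    y = y0
    for _ in range(nsteps):
        k1 = lam*y/(1.0 - g*dt*lam)                       # stage 1 solve
        k2 = lam*(y + dt*(1.0 - g)*k1)/(1.0 - g*dt*lam)   # stage 2 solve
        y = y + dt*((1.0 - g)*k1 + g*k2)                  # b = last row of A
    return y

lam, T = -1.0, 1.0
exact = math.exp(lam*T)
e1 = abs(sdirk2(lam, 1.0, 0.05, 20) - exact)
e2 = abs(sdirk2(lam, 1.0, 0.025, 40) - exact)
ratio = e1/e2
```

For a nonlinear right-hand side the stage solves become Newton iterations, but the structure of the loop is unchanged.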
A solid reactor core thermal model for nuclear thermal rockets
NASA Astrophysics Data System (ADS)
Rider, William J.; Cappiello, Michael W.; Liles, Dennis R.
1991-01-01
A Helium/Hydrogen Cooled Reactor Analysis (HERA) computer code has been developed. HERA has the ability to model arbitrary geometries in three dimensions, which allows the user to easily analyze reactor cores constructed of prismatic graphite elements. The code accounts for heat generation in the fuel, control rods, and other structures; conduction and radiation across gaps; convection to the coolant; and a variety of boundary conditions. The numerical solution scheme has been optimized for vector computers, making long transient analyses economical. Time integration is either explicit or implicit, which allows the use of the model to accurately calculate both short- or long-term transients with an efficient use of computer time. Both the basic spatial and temporal integration schemes have been benchmarked against analytical solutions.
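The explicit/implicit trade-off the abstract describes can be sketched for 1-D heat conduction: explicit FTCS is cheap per step but stable only for r = αΔt/Δx² ≤ 1/2, while backward Euler is unconditionally stable and suits long transients taken with large steps. An illustrative sketch, not HERA's discretization:

```python
import numpy as np

# 1-D conduction u_t = alpha*u_xx with fixed zero-temperature ends.
N = 50
dx = 1.0/(N + 1)
alpha = 1.0
D2 = (-2*np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1))/(dx*dx)

# smooth profile plus a tiny high-frequency seed
u0 = np.sin(np.pi*np.arange(1, N + 1)*dx) + 1e-6*(-1.0)**np.arange(N)

# Explicit FTCS with r = 0.6 > 0.5: violates the stability limit.
dt_exp = 0.6*dx*dx/alpha
u_exp = u0.copy()
for _ in range(200):
    u_exp = u_exp + dt_exp*alpha*(D2 @ u_exp)

# Backward Euler with r = 5: ten times the explicit limit, stable.
dt_imp = 5.0*dx*dx/alpha
A_imp = np.eye(N) - dt_imp*alpha*D2
u_imp = u0.copy()
for _ in range(200):
    u_imp = np.linalg.solve(A_imp, u_imp)
```

The explicit run blows up from the high-frequency seed, while the implicit run decays smoothly despite the much larger step, which is the rationale for offering both modes in a transient analysis code.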
NASA Astrophysics Data System (ADS)
Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; Niklasson, Anders M. N.; Head-Gordon, Teresa; Skylaris, Chris-Kriton
2017-03-01
Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, which we both employ in two radically distinct regimes—in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.
Silva, Bhagya Nathali; Khan, Murad; Han, Kijun
2018-02-25
The emergence of smart devices and smart appliances has highly favored the realization of the smart home concept. Modern smart home systems handle a wide range of user requirements. Energy management and energy conservation are in the spotlight when deploying sophisticated smart homes. However, the performance of energy management systems is highly influenced by user behaviors and adopted energy management approaches. Appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption. Hence, we propose a smart home energy management system that reduces unnecessary energy consumption by integrating an automated switching off system with load balancing and appliance scheduling algorithm. The load balancing scheme acts according to defined constraints such that the cumulative energy consumption of the household is managed below the defined maximum threshold. The scheduling of appliances adheres to the least slack time (LST) algorithm while considering user comfort during scheduling. The performance of the proposed scheme has been evaluated against an existing energy management scheme through computer simulation. The simulation results have revealed a significant improvement gained through the proposed LST-based energy management scheme in terms of cost of energy, along with reduced domestic energy consumption facilitated by an automated switching off mechanism.
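The least slack time policy used for the appliance scheduling can be sketched in a few lines; the field layout and the numbers are hypothetical, chosen only to show the ordering rule.

```python
def least_slack_first(appliances, now=0.0):
    """Order appliances by least slack time: slack = deadline - now - run_time.

    Each appliance is (name, run_time, deadline); smaller slack = more urgent.
    """
    return sorted(appliances, key=lambda a: a[2] - now - a[1])

jobs = [("washer", 2.0, 10.0),      # slack 8.0
        ("dryer", 1.5, 3.0),        # slack 1.5  -> most urgent
        ("dishwasher", 1.0, 6.0)]   # slack 5.0
order = [name for name, _, _ in least_slack_first(jobs)]
```

A real scheduler would also enforce the load-balancing threshold before dispatching each job; here only the LST ordering itself is shown.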
Morrison, Abigail; Straube, Sirko; Plesser, Hans Ekkehard; Diesmann, Markus
2007-01-01
Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
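For neuron models with linear subthreshold dynamics, the exact grid-to-grid integration the authors build on amounts to applying a precomputed matrix-exponential propagator once per step. A minimal sketch for a current-based leaky integrate-and-fire neuron follows; the time constants and the unit input are illustrative.

```python
import numpy as np

# Subthreshold LIF with an exponential synaptic current (R = 1):
#   tau_s * di/dt = -i,    tau_m * dv/dt = -v + i
tau_m, tau_s, h = 10.0, 2.0, 0.1          # ms; h is the grid step
A = np.array([[-1.0 / tau_s, 0.0],
              [1.0 / tau_m, -1.0 / tau_m]])

# Exact one-step propagator exp(A*h), computed once via eigendecomposition
# (the eigenvalues -1/tau_s and -1/tau_m are distinct).
w, V = np.linalg.eig(A)
P = (V @ np.diag(np.exp(w * h)) @ np.linalg.inv(V)).real

y = np.array([1.0, 0.0])                   # unit synaptic kick at t = 0
for _ in range(1000):                      # advance 100 ms, exactly, step by step
    y = P @ y
```

Between spikes the update is a single matrix-vector product, with no discretization error regardless of the step size h.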
Analysis of High Order Difference Methods for Multiscale Complex Compressible Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, H. C.; Tang, Harry (Technical Monitor)
2002-01-01
Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes with incremental studies was initiated. Here we further refine the analysis and improve the understanding of the adaptive numerical dissipation control strategy. Basically, the development of these schemes focuses on high order nondissipative schemes and takes advantage of the progress that has been made over the last 30 years in numerical methods for conservation laws, such as techniques for imposing boundary conditions, techniques for stability at shock waves, and techniques for stable and accurate long-time integration. We concentrate on high order centered spatial discretizations and a fourth-order Runge-Kutta temporal discretization as the base scheme. Near the boundaries, the base scheme has stable boundary difference operators. To further enhance stability, the split form of the inviscid flux derivatives is frequently used for smooth flow problems. To enhance nonlinear stability, linear high order numerical dissipations are employed away from discontinuities, and nonlinear filters are employed after each time step in order to suppress spurious oscillations near discontinuities and to minimize the smearing of turbulent fluctuations. Although these schemes are built from many components, each of which is well-known, it is not entirely obvious how the different components can best be connected. For example, the nonlinear filter could instead have been built into the spatial discretization, so that it would have been activated at each stage in the Runge-Kutta time stepping.
We could also envisage a mechanism that activates the split form of the equations only in some parts of the domain. Another issue is how to define good sensors for determining in which parts of the computational domain a certain feature should be filtered by the appropriate numerical dissipation. For the present study we employ a wavelet technique, introduced in earlier work, as the sensor. Here, the method is briefly described with selected numerical experiments.
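A crude stand-in for such a sensor-plus-filter strategy is shown below: Haar-style detail coefficients flag cells near a discontinuity, and a smoothing filter is applied only there, leaving the smooth (small-scale wave) region untouched. This is an illustration of the idea, not the authors' actual wavelet sensor.

```python
import numpy as np

def haar_sensor(u, tol=0.1):
    """Flag cells whose Haar detail coefficient (scaled first difference)
    exceeds tol -- a crude discontinuity sensor."""
    d = np.abs(np.diff(u)) / np.sqrt(2.0)
    flags = np.zeros(len(u), dtype=bool)
    idx = np.where(d > tol)[0]
    flags[idx] = True
    flags[idx + 1] = True
    return flags

def filtered(u, flags):
    """Apply a 3-point smoothing filter only where the sensor fired."""
    v = u.copy()
    for i in range(1, len(u) - 1):
        if flags[i]:
            v[i] = 0.25 * u[i - 1] + 0.5 * u[i] + 0.25 * u[i + 1]
    return v

x = np.linspace(0.0, 1.0, 101)
# A shock plus low-amplitude smooth waves that must not be dissipated.
u = np.where(x < 0.5, 1.0, 0.0) + 0.001 * np.sin(20 * np.pi * x)
flags = haar_sensor(u)
v = filtered(u, flags)
```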
An Energy Decaying Scheme for Nonlinear Dynamics of Shells
NASA Technical Reports Server (NTRS)
Bottasso, Carlo L.; Bauchau, Olivier A.; Choi, Jou-Young; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
A novel integration scheme for nonlinear dynamics of geometrically exact shells is developed based on the inextensible director assumption. The new algorithm is designed so as to imply the strict decay of the system total mechanical energy at each time step, and consequently unconditional stability is achieved in the nonlinear regime. Furthermore, the scheme features tunable high-frequency numerical damping and is therefore stiffly accurate. The method is tested for a finite element spatial formulation of shells based on mixed interpolations of strain tensorial components and on a two-parameter representation of director rotations. The robustness of the scheme is illustrated with the help of numerical examples.
Wang, Yujuan; Song, Yongduan; Ren, Wei
2017-07-06
This paper presents a distributed adaptive finite-time control solution to the formation-containment problem for multiple networked systems with uncertain nonlinear dynamics and directed communication constraints. By exploiting the special topology of a newly constructed symmetric matrix, the technical difficulty in finite-time formation-containment control arising from the asymmetric Laplacian matrix under one-way directed communication is circumvented. Based upon fractional power feedback of the local error, an adaptive distributed control scheme is established to drive the leaders into the prespecified formation configuration in finite time. Meanwhile, a distributed adaptive control scheme, independent of the unavailable inputs of the leaders, is designed to keep the followers within a bounded distance from the moving leaders and then to make the followers enter the convex hull shaped by the formation of the leaders in finite time. The effectiveness of the proposed control scheme is confirmed by simulation.
Numerical simulation of conservation laws
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; To, Wai-Ming
1992-01-01
A new numerical framework for solving conservation laws is being developed. This new approach differs substantially from the well established methods, i.e., finite difference, finite volume, finite element and spectral methods, in both concept and methodology. The key features of the current scheme include: (1) direct discretization of the integral forms of conservation laws, (2) treating space and time on the same footing, (3) flux conservation in space and time, and (4) unified treatment of the convection and diffusion fluxes. The model equation considered in the initial study is the standard one dimensional unsteady constant-coefficient convection-diffusion equation. In a stability study, it is shown that the principal and spurious amplification factors of the current scheme, respectively, are structurally similar to those of the leapfrog/DuFort-Frankel scheme. As a result, the current scheme has no numerical diffusion in the special case of pure convection and is unconditionally stable in the special case of pure diffusion. Assuming smooth initial data, it will be shown theoretically and numerically that, by using an easily determined optimal time step, the accuracy of the current scheme may reach a level which is several orders of magnitude higher than that of the MacCormack scheme, with virtually identical operation count.
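The DuFort-Frankel scheme named in the comparison can be sketched as follows. It is explicit yet unconditionally stable for pure diffusion, which is the structural property the abstract refers to; the grid and the Courant-like parameter r are illustrative.

```python
import numpy as np

def dufort_frankel_step(u_old, u, r):
    """One DuFort-Frankel step for u_t = nu*u_xx on a periodic grid,
    with r = nu*dt/dx^2:
        u_i^{n+1} = ((1-2r)*u_i^{n-1} + 2r*(u_{i+1}^n + u_{i-1}^n)) / (1+2r)
    """
    nbr = np.roll(u, 1) + np.roll(u, -1)
    return ((1.0 - 2.0 * r) * u_old + 2.0 * r * nbr) / (1.0 + 2.0 * r)

n, r = 64, 5.0                     # r = 5 is ten times the explicit (FTCS) limit 1/2
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u_old = np.sin(x)
u = u_old.copy()                   # crude self-start for the two-level history
for _ in range(200):
    u_old, u = u, dufort_frankel_step(u_old, u, r)
```

Even at ten times the explicit stability limit the sine mode decays smoothly instead of blowing up.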
NASA Astrophysics Data System (ADS)
Chang, Chueh-Hsin; Yu, Ching-Hao; Sheu, Tony Wen-Hann
2016-10-01
In this article, we numerically revisit the long-time solution behavior of the Camassa-Holm equation u_t - u_xxt + 2u_x + 3uu_x = 2u_x u_xx + uu_xxx. The finite difference solution of this integrable equation is sought subject to the newly derived initial condition with Delta-function potential. Our underlying strategy for deriving a numerically phase-accurate finite difference scheme in the time domain is to reduce the numerical dispersion error through minimization of the derived discrepancy between the numerical and exact modified wavenumbers. Additionally, to achieve the goal of conserving Hamiltonians in the completely integrable equation of current interest, a symplecticity-preserving time-stepping scheme is developed. Based on the solutions computed from the temporally symplecticity-preserving and the spatially wavenumber-preserving schemes, the long-time asymptotic CH solution characters can be accurately depicted in distinct regions of the space-time domain, each featuring quantitatively very different solution behaviors. We also aim to numerically confirm that in the two transition zones the long-time asymptotics can indeed be described in terms of the theoretically derived Painlevé transcendents. A further aim of this study is to numerically exhibit a close connection between the presently predicted finite-difference solution and the solution of the Painlevé ordinary differential equation of type II in the two transition zones.
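The modified-wavenumber idea underlying the dispersion-error minimization can be illustrated for the simplest case, the second-order central difference (the paper's optimized scheme is of course different): applying the stencil to exp(i*k*x) yields i*k' exp(i*k*x) with k' = sin(k*dx)/dx, and |k' - k| is the dispersion error the optimization would minimize.

```python
import numpy as np

def modified_wavenumber(k, dx):
    """Modified wavenumber of the central difference (u[i+1]-u[i-1])/(2*dx)."""
    return np.sin(k * dx) / dx

dx = 0.1
k = np.linspace(0.0, np.pi / dx, 100)        # up to the grid cutoff
err = np.abs(modified_wavenumber(k, dx) - k)  # dispersion error vs wavenumber
```

The error vanishes for long waves and grows toward the grid cutoff, which is exactly the behavior an optimized scheme trades off over a chosen wavenumber band.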
Li, Chun-Ta; Weng, Chi-Yao; Lee, Cheng-Chi; Wang, Chun-Cheng
2015-11-01
To protect patient privacy and ensure authorized access to remote medical services, many remote user authentication schemes for the integrated electronic patient record (EPR) information system have been proposed in the literature. In a recent paper, Das proposed a hash-based remote user authentication scheme using passwords and smart cards for the integrated EPR information system, and claimed that the proposed scheme could resist various passive and active attacks. However, in this paper, we found that Das's authentication scheme is still vulnerable to modification and user duplication attacks. We therefore propose a secure and efficient authentication scheme for the integrated EPR information system based on a lightweight hash function and bitwise exclusive-or (XOR) operations. The security proof and performance analysis show our new scheme is well-suited to adoption in remote medical healthcare services.
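A toy hash-and-XOR challenge-response exchange of the general kind discussed here is sketched below. This is NOT the proposed scheme and is not secure as written; every protocol detail (message layout, what the server stores, the masking rule) is invented purely to show how hash and XOR operations combine into a lightweight exchange.

```python
import hashlib
import os

def h(*parts):
    # SHA-256 over concatenated byte strings; a stand-in for the scheme's
    # lightweight hash function.
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Registration: the server stores a salted verifier, never the password.
pw, salt = b"patient-password", os.urandom(16)
stored = h(pw, salt)

# Login: the server issues a fresh nonce; the client returns the verifier
# masked by XOR with a nonce-dependent hash, so the verifier never travels
# in the clear and a replay fails for a new nonce.
nonce = os.urandom(16)
mask = h(nonce, salt)
client_msg = xor(h(pw, salt), mask)

# Server side: unmask and compare with the stored verifier.
ok = xor(client_msg, mask) == stored
```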
ICASE Semiannual Report, October 1, 1992 through March 31, 1993
1993-06-01
NUMERICAL MATHEMATICS. Saul Abarbanel: Further results have been obtained regarding long-time integration of high order compact finite difference schemes ... overall accuracy. These problems are common to all numerical methods: finite differences, finite elements, and spectral methods. It should be noted that ... fourth order finite difference scheme. In the same case, the D6 wavelets provide a sixth order, noncompact finite difference formula. ...
A more secure anonymous user authentication scheme for the integrated EPR information system.
Wen, Fengtong
2014-05-01
Secure and efficient mutual user authentication is an essential task for an integrated electronic patient record (EPR) information system. Recently, several authentication schemes have been proposed to meet this requirement. In a recent paper, Lee et al. proposed an efficient and secure password-based authentication scheme using smart cards for the integrated EPR information system. This scheme is believed to resist a range of network attacks; in particular, the authors claimed that it could resist the lost smart card attack. However, we reanalyze the security of Lee et al.'s scheme and show that it fails to resist off-line password guessing attacks if the secret information stored in the smart card is compromised. This also renders their scheme insecure against user impersonation attacks. We then propose a new user authentication scheme for integrated EPR information systems based on quadratic residues. The new scheme not only resists a range of network attacks but also provides user anonymity. We show that our proposed scheme provides stronger security.
NASA Astrophysics Data System (ADS)
Xia, Weiwei; Shen, Lianfeng
We propose two vertical handoff schemes for cellular network and wireless local area network (WLAN) integration: integrated service-based handoff (ISH) and integrated service-based handoff with queue capabilities (ISHQ). Compared with existing handoff schemes in integrated cellular/WLAN networks, the proposed schemes consider a more comprehensive set of system characteristics such as different features of voice and data services, dynamic information about the admitted calls, user mobility and vertical handoffs in two directions. The code division multiple access (CDMA) cellular network and IEEE 802.11e WLAN are taken into account in the proposed schemes. We model the integrated networks by using multi-dimensional Markov chains and the major performance measures are derived for voice and data services. The important system parameters such as thresholds to prioritize handoff voice calls and queue sizes are optimized. Numerical results demonstrate that the proposed ISHQ scheme can maximize the utilization of overall bandwidth resources with the best quality of service (QoS) provisioning for voice and data services.
An interactive adaptive remeshing algorithm for the two-dimensional Euler equations
NASA Technical Reports Server (NTRS)
Slack, David C.; Walters, Robert W.; Lohner, R.
1990-01-01
An interactive adaptive remeshing algorithm utilizing a frontal grid generator and a variety of time integration schemes for the two-dimensional Euler equations on unstructured meshes is presented. Several device dependent interactive graphics interfaces have been developed along with a device independent DI-3000 interface which can be employed on any computer that has the supporting software including the Cray-2 supercomputers Voyager and Navier. The time integration methods available include: an explicit four stage Runge-Kutta and a fully implicit LU decomposition. A cell-centered finite volume upwind scheme utilizing Roe's approximate Riemann solver is developed. To obtain higher order accurate results a monotone linear reconstruction procedure proposed by Barth is utilized. Results for flow over a transonic circular arc and flow through a supersonic nozzle are examined.
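The explicit four-stage Runge-Kutta integrator mentioned above can be sketched generically. Shown here is the classical RK4 step applied to a scalar test problem; production CFD codes often use low-storage four-stage variants with different coefficients, so treat this as a representative sketch.

```python
import math

def rk4_step(f, u, t, dt):
    """One classical four-stage Runge-Kutta step for du/dt = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + 0.5 * dt, u + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, u + 0.5 * dt * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Smoke test on u' = -u, whose exact solution is exp(-t).
u, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    u = rk4_step(lambda t, u: -u, u, t, dt)
    t += dt
```

For a semi-discrete Euler solver, f would be the negative residual of the spatial discretization and u the vector of conserved variables.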
NASA Astrophysics Data System (ADS)
Lemarié, F.; Debreu, L.
2016-02-01
Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time-step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with small Courant numbers, compared to the Courant-Friedrichs-Lewy (CFL) condition, except at just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while remaining robust to changes in Courant number in terms of accuracy. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical CFL restriction while avoiding the numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution to the Courant number, large phase delay, and possibly an excess of numerical damping with unphysical orientation). Most regional oceanic models have successfully been using fourth order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers, while having a very reasonable computational cost. To our knowledge, no unconditionally stable scheme with such high order accuracy in time and space has been presented so far in the literature.
Furthermore, we show how those schemes can be made monotonic without compromising their stability properties.
NASA Astrophysics Data System (ADS)
Caplan, R. M.; Mikić, Z.; Linker, J. A.; Lionello, R.
2017-05-01
We explore the performance and advantages/disadvantages of using unconditionally stable explicit super time-stepping (STS) algorithms versus implicit schemes with Krylov solvers for integrating parabolic operators in thermodynamic MHD models of the solar corona. Specifically, we compare the second-order Runge-Kutta Legendre (RKL2) STS method with the implicit backward Euler scheme computed using the preconditioned conjugate gradient (PCG) solver with both a point-Jacobi and a non-overlapping domain decomposition ILU0 preconditioner. The algorithms are used to integrate anisotropic Spitzer thermal conduction and artificial kinematic viscosity at time-steps much larger than classic explicit stability criteria allow. A key component of the comparison is the use of an established MHD model (MAS) to compute a real-world simulation on a large HPC cluster. Special attention is placed on the parallel scaling of the algorithms. It is shown that, for a specific problem and model, the RKL2 method is comparable or surpasses the implicit method with PCG solvers in performance and scaling, but suffers from some accuracy limitations. These limitations, and the applicability of RKL methods are briefly discussed.
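The implicit branch of the comparison, backward Euler with a point-Jacobi preconditioned conjugate gradient solve, can be sketched for a 1-D model diffusion operator (purely illustrative; the MAS code solves anisotropic conduction on a 3-D spherical grid).

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, maxit=200):
    """Point-Jacobi preconditioned conjugate gradient for SPD A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Backward Euler for u_t = u_xx: solve (I - dt*L) u_new = u_old, with dt
# far beyond the explicit stability limit dx^2/2.
n, dx, dt = 49, 0.02, 0.01
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2
A = np.eye(n) - dt * L
u_old = np.sin(np.pi * np.linspace(dx, 1.0 - dx, n))
u_new = pcg(A, u_old, 1.0 / np.diag(A))
```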
A simple molecular mechanics integrator in mixed rigid body and dihedral angle space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vitalis, Andreas, E-mail: a.vitalis@bioc.uzh.ch; Pappu, Rohit V.
2014-07-21
We propose a numerical scheme to integrate equations of motion in a mixed space of rigid-body and dihedral angle coordinates. The focus of the presentation is biomolecular systems and the framework is applicable to polymers with tree-like topology. By approximating the effective mass matrix as diagonal and lumping all bias torques into the time dependencies of the diagonal elements, we take advantage of the formal decoupling of individual equations of motion. We impose energy conservation independently for every degree of freedom and this is used to derive a numerical integration scheme. The cost of all auxiliary operations is linear in the number of atoms. By coupling the scheme to one of two popular thermostats, we extend the method to sample constant temperature ensembles. We demonstrate that the integrator of choice yields satisfactory stability and is free of mass-metric tensor artifacts, which is expected by construction of the algorithm. Two fundamentally different systems, viz., liquid water and an α-helical peptide in a continuum solvent are used to establish the applicability of our method to a wide range of problems. The resultant constant temperature ensembles are shown to be thermodynamically accurate. The latter relies on detailed, quantitative comparisons to data from reference sampling schemes operating on exactly the same sets of degrees of freedom.
An Efficient Scheduling Scheme on Charging Stations for Smart Transportation
NASA Astrophysics Data System (ADS)
Kim, Hye-Jin; Lee, Junghoon; Park, Gyung-Leen; Kang, Min-Jae; Kang, Mikyung
This paper proposes a reservation-based scheduling scheme for the charging station to decide the service order of multiple requests, aiming to improve the satisfaction of electric vehicle users. The proposed scheme makes it possible for a customer to reduce the charging cost and waiting time, while a station can extend the number of clients it can serve. A linear rank function is defined based on estimated arrival time, waiting time bound, and the amount of needed power, reducing the scheduling complexity. Receiving the requests from the clients, the power station decides the charging order by the rank function and then replies to the requesters with the waiting time and cost it can guarantee. Each requester can decide whether to charge at that station or try another station. This scheduler can evolve to integrate new pricing policies and services, enriching the electric vehicle transport system.
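A linear rank function of the kind described can be sketched as follows. The weights are invented for illustration; the paper states only that the rank is linear in the three quantities.

```python
def rank(request, w_arrival=1.0, w_wait=1.0, w_power=0.5):
    """Linear rank of a charging request; lower rank is served first.

    request = (estimated_arrival, waiting_time_bound, needed_kwh).
    """
    arrival, wait_bound, kwh = request
    return w_arrival * arrival + w_wait * wait_bound + w_power * kwh

requests = [(10.0, 5.0, 20.0),   # rank 25.0
            (2.0, 30.0, 10.0),   # rank 37.0
            (5.0, 8.0, 4.0)]     # rank 15.0 -> served first
order = sorted(requests, key=rank)
```

Sorting by a scalar rank keeps the scheduling complexity at O(n log n) per batch of requests, which is the complexity reduction the abstract alludes to.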
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jannetti, C.; Becker, R.
The software is an ABAQUS/Standard UMAT (user-defined material behavior subroutine) that implements the constitutive model for shape-memory alloy materials developed by Jannetti et al. (2003a), using a fully implicit time integration scheme to integrate the constitutive equations. The UMAT is used in conjunction with ABAQUS/Standard to perform finite-element analyses of SMA materials.
Receding horizon online optimization for torque control of gasoline engines.
Kang, Mingxin; Shen, Tielong
2016-11-01
This paper proposes a model-based nonlinear receding horizon optimal control scheme for the engine torque tracking problem. The controller design directly employs the nonlinear model exploited based on mean-value modeling principle of engine systems without any linearizing reformation, and the online optimization is achieved by applying the Continuation/GMRES (generalized minimum residual) approach. Several receding horizon control schemes are designed to investigate the effects of the integral action and integral gain selection. Simulation analyses and experimental validations are implemented to demonstrate the real-time optimization performance and control effects of the proposed torque tracking controllers.
Some Aspects of Essentially Nonoscillatory (ENO) Formulations for the Euler Equations, Part 3
NASA Technical Reports Server (NTRS)
Chakravarthy, Sukumar R.
1990-01-01
An essentially nonoscillatory (ENO) formulation is described for hyperbolic systems of conservation laws. ENO approaches are based on smart interpolation to avoid spurious numerical oscillations. ENO schemes are a superset of Total Variation Diminishing (TVD) schemes. In the recent past, TVD formulations were used to construct shock capturing finite difference methods. At extremum points of the solution, TVD schemes automatically reduce to being first-order accurate discretizations locally, while away from extrema they can be constructed to be of higher order accuracy. The new framework helps construct essentially non-oscillatory finite difference methods without recourse to local reductions of accuracy to first order. Thus arbitrarily high orders of accuracy can be obtained. The basic general ideas of the new approach can be specialized in several ways and one specific implementation is described based on: (1) the integral form of the conservation laws; (2) reconstruction based on the primitive functions; (3) extension to multiple dimensions in a tensor product fashion; and (4) Runge-Kutta time integration. The resulting method is fourth-order accurate in time and space and is applicable to uniform Cartesian grids. The construction of such schemes for scalar equations and systems in one and two space dimensions is described along with several examples which illustrate interesting aspects of the new approach.
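The "smart interpolation" at the heart of ENO, growing the stencil toward the smoother side as judged by divided differences so that it never straddles a discontinuity when a smooth alternative exists, can be sketched for scalar cell data. This is a simplified stencil-selection routine, not the paper's full primitive-function reconstruction.

```python
def undivided_diff(v, j, k):
    """k-th undivided difference of cell values v on cells j..j+k."""
    d = [v[j + m] for m in range(k + 1)]
    for _ in range(k):
        d = [d[m + 1] - d[m] for m in range(len(d) - 1)]
    return d[0]

def eno_stencil(v, i, order):
    """Leftmost cell of the ENO stencil of width `order` containing cell i.

    Starting from {i}, the stencil is grown one cell at a time toward the
    side whose difference is smaller in magnitude.
    """
    left = i
    for k in range(1, order):
        if abs(undivided_diff(v, left - 1, k)) < abs(undivided_diff(v, left, k)):
            left -= 1          # extend left: the smoother side
        # otherwise keep `left`, i.e. extend to the right
    return left

v = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]   # jump between cells 3 and 4
s_before = eno_stencil(v, 3, 3)   # stencil {1,2,3}: stays left of the jump
s_after = eno_stencil(v, 4, 3)    # stencil {4,5,6}: stays right of the jump
```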
Coherence rephasing combined with spin-wave storage using chirped control pulses
NASA Astrophysics Data System (ADS)
Demeter, Gabor
2014-06-01
Photon-echo based optical quantum memory schemes often employ intermediate steps to transform optical coherences to spin coherences for longer storage times. We analyze a scheme that uses three identical chirped control pulses for coherence rephasing in an inhomogeneously broadened ensemble of three-level Λ systems. The pulses induce a cyclic permutation of the atomic populations in the adiabatic regime. Optical coherences created by a signal pulse are stored as spin coherences at an intermediate time interval, and are rephased for echo emission when the ensemble is returned to the initial state. Echo emission during a possible partial rephasing when the medium is inverted can be suppressed with an appropriate choice of control pulse wave vectors. We demonstrate that the scheme works in an optically dense ensemble, despite control pulse distortions during propagation. It integrates conveniently the spin-wave storage step into memory schemes based on a second rephasing of the atomic coherences.
Numerical Investigation of a Model Scramjet Combustor Using DDES
NASA Astrophysics Data System (ADS)
Shin, Junsu; Sung, Hong-Gye
2017-04-01
Non-reactive flows moving through a model scramjet were investigated using a delayed detached eddy simulation (DDES), which is a hybrid scheme combining a Reynolds-averaged Navier-Stokes scheme and a large eddy simulation. The three dimensional Navier-Stokes equations were solved numerically on a structured grid using finite volume methods. An in-house code was developed. This code used a monotonic upstream-centered scheme for conservation laws (MUSCL) with an advection upstream splitting method by pressure weight function (AUSMPW+) for the spatial discretization. In addition, a 4th order Runge-Kutta scheme with preconditioning was used for time integration. The geometries and boundary conditions of a scramjet combustor operated by DLR, the German aerospace center, were considered. The profiles of the lower wall pressure and axial velocity obtained from a time-averaged solution were compared with experimental results. The mixing efficiency and total pressure recovery factor were also provided in order to assess the performance of the combustor.
A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.
2017-02-05
Implicit–Explicit (IMEX) schemes are widely used time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.
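For readers unfamiliar with IMEX time stepping, a minimal first-order implicit-explicit Euler step (far simpler than the multi-stage schemes analyzed in the paper, and invented here purely for illustration) looks like this:

```python
# First-order IMEX Euler step for du/dt = f(u) - lam*u, treating the stiff
# linear term implicitly and the nonstiff term f explicitly:
#   (u_new - u_old)/dt = f(u_old) - lam * u_new
def imex_euler_step(f, lam, u, dt):
    return (u + dt * f(u)) / (1.0 + dt * lam)

# Stiff test: du/dt = u**2 - 1000*u from u(0) = 1. A fully explicit Euler
# step with dt = 0.01 would be unstable (|1 - 1000*dt| = 9 > 1); the IMEX
# step remains stable and the solution decays.
u = 1.0
for _ in range(10):
    u = imex_euler_step(lambda v: v**2, 1000.0, u, 0.01)
```

The implicit treatment of the stiff part removes the severe step-size restriction while keeping the nonlinear part cheap to evaluate.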
NASA Technical Reports Server (NTRS)
Bates, J. R.; Moorthi, S.; Higgins, R. W.
1993-01-01
An adiabatic global multilevel primitive equation model using a two time-level, semi-Lagrangian semi-implicit finite-difference integration scheme is presented. A Lorenz grid is used for vertical discretization and a C grid for the horizontal discretization. The momentum equation is discretized in vector form, thus avoiding problems near the poles. The 3D model equations are reduced by a linear transformation to a set of 2D elliptic equations, whose solution is found by means of an efficient direct solver. The model (with minimal physics) is integrated for 10 days starting from an initialized state derived from real data. A resolution of 16 levels in the vertical is used, with various horizontal resolutions. The model is found to be stable and efficient, and to give realistic output fields. Integrations with time steps of 10 min, 30 min, and 1 h are compared, and the differences are found to be acceptable.
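A one-dimensional, linearly interpolated semi-Lagrangian advection step illustrates the idea behind the model's semi-Lagrangian scheme: values are traced back along characteristics, so Courant numbers above one are allowed. The grid, velocity, and Courant number are invented for this sketch.

```python
import numpy as np

# One semi-Lagrangian step for u_t + c u_x = 0 on a periodic grid: each grid
# value is taken from the (linearly interpolated) departure point x - c*dt.
def semi_lagrangian_step(u, c, dt, dx):
    n = len(u)
    x = np.arange(n) * dx
    xd = (x - c * dt) % (n * dx)          # departure points, wrapped
    j = np.floor(xd / dx).astype(int)     # left neighbour index
    w = xd / dx - j                       # linear interpolation weight
    return (1 - w) * u[j] + w * u[(j + 1) % n]

n, dx, c = 100, 1.0, 2.5
u = np.exp(-0.5 * ((np.arange(n) * dx - 30.0) / 5.0) ** 2)
dt = 1.0    # Courant number c*dt/dx = 2.5, beyond the explicit CFL limit
for _ in range(8):
    u = semi_lagrangian_step(u, c, dt, dx)
```

The Gaussian is transported 20 cells in 8 steps without instability, despite a Courant number of 2.5.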
A family of compact high order coupled time-space unconditionally stable vertical advection schemes
NASA Astrophysics Data System (ADS)
Lemarié, Florian; Debreu, Laurent
2016-04-01
Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time-step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with Courant numbers that are small compared to the Courant-Friedrichs-Lewy (CFL) limit, except for just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while remaining robust in accuracy to changes in Courant number. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical CFL restriction while avoiding the numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution to the Courant number, large phase delay, and a possible excess of numerical damping with unphysical orientation). Most regional oceanic models have successfully used fourth-order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that these schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers, while having a very reasonable computational cost.
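A Crank-Nicolson step for vertical advection, solved as a small linear system, is one standard unconditionally stable implicit scheme of the kind this talk compares against; it is not the coupled time-space scheme the authors propose, and the grid and velocity are illustrative.

```python
import numpy as np

# Crank-Nicolson step for u_t + w u_z = 0 (constant vertical velocity w) on
# a periodic column: (I + 0.5*dt*A) u_new = (I - 0.5*dt*A) u_old, where A is
# the centered-difference advection operator. The implicit half makes the
# scheme stable for any vertical Courant number.
def crank_nicolson_advect(u, w, dt, dz):
    n = len(u)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = w / (2 * dz)
        A[i, (i - 1) % n] = -w / (2 * dz)
    I = np.eye(n)
    return np.linalg.solve(I + 0.5 * dt * A, (I - 0.5 * dt * A) @ u)

n = 64
u = np.sin(2 * np.pi * np.arange(n) / n)
for _ in range(10):
    u = crank_nicolson_advect(u, w=1.0, dt=5.0, dz=1.0)   # Courant number 5
```

Because A is skew-symmetric, the update matrix is orthogonal: the solution norm is preserved exactly, with no damping, even at Courant number 5. The phase error at such large Courant numbers is precisely the kind of implicit-scheme inaccuracy the talk discusses.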
NASA Astrophysics Data System (ADS)
Li, G. Q.; Zhu, Z. H.
2015-12-01
Dynamic modeling of tethered spacecraft that accounts for the elasticity of the tether is prone to numerical instability and error accumulation over long-term numerical integration. This paper addresses these challenges by proposing a globally stable numerical approach combining the nodal position finite element method (NPFEM) with implicit, symplectic, two-stage, fourth-order Gauss-Legendre Runge-Kutta time integration. The NPFEM eliminates numerical error accumulation by using the position instead of the displacement of the tether as the state variable, while the symplectic integration enforces the energy and momentum conservation of the discretized finite element model to ensure the global stability of the numerical solution. The effectiveness and robustness of the proposed approach are assessed on an elastic pendulum problem, whose dynamic response resembles that of tethered spacecraft, in comparison with commonly used time integrators such as the classical fourth-order Runge-Kutta scheme and other families of non-symplectic Runge-Kutta schemes. Numerical results show that the proposed approach is accurate and that the energy of the corresponding numerical model is conserved over long-term numerical integration. Finally, the proposed approach is applied to the dynamic modeling of the deorbiting of tethered spacecraft over a long period.
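The implicit midpoint rule, the one-stage member of the Gauss-Legendre Runge-Kutta family (a simpler relative of the two-stage, fourth-order scheme used in the paper), shows the bounded-energy behavior of symplectic integration on a toy pendulum; the problem and step size are invented for this sketch.

```python
import numpy as np

# Implicit midpoint (one-stage Gauss-Legendre) for the pendulum
#   q' = p,  p' = -sin(q).
# Symplectic, so the energy error stays bounded over long integrations
# instead of drifting. The implicit stage is solved by fixed-point iteration.
def midpoint_step(q, p, dt, iters=50):
    qm, pm = q, p
    for _ in range(iters):                 # solve the implicit midpoint stage
        qm = q + 0.5 * dt * pm
        pm = p - 0.5 * dt * np.sin(qm)
    return q + dt * pm, p - dt * np.sin(qm)

def energy(q, p):
    return 0.5 * p ** 2 - np.cos(q)

q, p, dt = 1.0, 0.0, 0.1
e0 = energy(q, p)
drift = 0.0
for _ in range(10000):                     # 10000 steps: a long integration
    q, p = midpoint_step(q, p, dt)
    drift = max(drift, abs(energy(q, p) - e0))
```

A non-symplectic scheme of the same order would show secular energy drift over this many steps; here the energy error merely oscillates at the O(dt^2) level.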
Coalescent: an open-science framework for importance sampling in coalescent theory.
Tewari, Susanta; Spouge, John L
2015-01-01
Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time, following the infinite sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions.
In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature, to discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2^8 programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
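A minimal importance sampling estimator together with its effective sample size, the efficiency measure the authors argue is insufficient on its own, can be sketched as follows; the target, proposal, and sample size are invented for illustration.

```python
import numpy as np

# Importance sampling: estimate an expectation under a target density using
# draws from a proposal, and report effective sample size (ESS).
rng = np.random.default_rng(0)

def importance_sample(target_pdf, proposal_pdf, proposal_draw, n):
    x = proposal_draw(n)
    w = target_pdf(x) / proposal_pdf(x)       # importance weights
    estimate = np.mean(w * x ** 2)            # here: E_target[X^2]
    ess = w.sum() ** 2 / (w ** 2).sum()       # effective sample size
    return estimate, ess

# Target: standard normal (E[X^2] = 1); proposal: wider normal (sd = 2).
norm = lambda x, s: np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2 * np.pi))
est, ess = importance_sample(lambda x: norm(x, 1.0),
                             lambda x: norm(x, 2.0),
                             lambda n: 2.0 * rng.standard_normal(n),
                             200000)
```

Two proposals can have similar ESS yet very different cost per draw, which is exactly why the authors rank schemes by running time as well.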
ACCURATE ORBITAL INTEGRATION OF THE GENERAL THREE-BODY PROBLEM BASED ON THE D'ALEMBERT-TYPE SCHEME
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minesaki, Yukitaka
2013-03-15
We propose an accurate orbital integration scheme for the general three-body problem that retains all conserved quantities except angular momentum. The scheme is provided by an extension of the d'Alembert-type scheme for constrained autonomous Hamiltonian systems. Although the proposed scheme is merely second-order accurate, it can precisely reproduce some periodic, quasiperiodic, and escape orbits. The Levi-Civita transformation plays a role in designing the scheme.
Integrating funds for health and social care: an evidence review.
Mason, Anne; Goddard, Maria; Weatherly, Helen; Chalkley, Martin
2015-07-01
Integrated funds for health and social care are one possible way of improving care for people with complex care requirements. If integrated funds facilitate coordinated care, this could support improvements in patient experience, and health and social care outcomes, reduce avoidable hospital admissions and delayed discharges, and so reduce costs. In this article, we examine whether this potential has been realized in practice. We propose a framework based on agency theory for understanding the role that integrated funding can play in promoting coordinated care, and review the evidence to see whether the expected effects are realized in practice. We searched eight electronic databases and relevant websites, and checked reference lists of reviews and empirical studies. We extracted data on the types of funding integration used by schemes, their benefits and costs (including unintended effects), and the barriers to implementation. We interpreted our findings with reference to our framework. The review included 38 schemes from eight countries. Most of the randomized evidence came from Australia, with nonrandomized comparative evidence available from Australia, Canada, England, Sweden and the US. None of the comparative evidence isolated the effect of integrated funding; instead, studies assessed the effects of 'integrated financing plus integrated care' (i.e. 'integration') relative to usual care. Most schemes (24/38) assessed health outcomes, of which over half found no significant impact on health. The impact of integration on secondary care costs or use was assessed in 34 schemes. In 11 schemes, integration had no significant effect on secondary care costs or utilisation. Only three schemes reported significantly lower secondary care use compared with usual care. In the remaining 19 schemes, the evidence was mixed or unclear. 
Some schemes achieved short-term reductions in delayed discharges, but there was anecdotal evidence of unintended consequences such as premature hospital discharge and heightened risk of readmission. No scheme achieved a sustained reduction in hospital use. The primary barrier was the difficulty of implementing financial integration, despite the existence of statutory and regulatory support. Even where funds were successfully pooled, budget holders' control over access to services remained limited. Barriers in the form of differences in performance frameworks, priorities and governance were prominent amongst the UK schemes, whereas difficulties in linking different information systems were more widespread. Despite these barriers, many schemes - including those that failed to improve health or reduce costs - reported that access to care had improved. Some of these schemes revealed substantial levels of unmet need and so total costs increased. It is often assumed in policy that integrating funding will promote integrated care, and lead to better health outcomes and lower costs. Both our agency theory-based framework and the evidence indicate that the link is likely to be weak. Integrated care may uncover unmet need. Resolving this can benefit both individuals and society, but total care costs are likely to rise. Provided that integration delivers improvements in quality of life, even with additional costs, it may, nonetheless, offer value for money. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Ku, Seung-Hoe; Hager, R.; Chang, C. S.; Chacon, L.; Chen, G.; EPSI Team
2016-10-01
The cancellation problem has been a long-standing issue for long-wavelength modes in electromagnetic gyrokinetic PIC simulations in toroidal geometry. In an attempt to resolve this issue, we implemented a fully implicit time integration scheme in the full-f gyrokinetic PIC code XGC1. The new scheme, based on the implicit Vlasov-Darwin PIC algorithm by G. Chen and L. Chacon, can potentially resolve the cancellation problem. The time advance for the field and particle equations is space-time-centered, with particle sub-cycling. The resulting system of equations is solved by a Picard iteration solver with a fixed-point accelerator. The algorithm is implemented in the parallel-velocity formalism instead of the canonical parallel-momentum formalism. XGC1 specializes in simulating the tokamak edge plasma with magnetic separatrix geometry. A fully implicit scheme could be a route to accurate and efficient gyrokinetic simulations. We will test whether this numerical scheme overcomes the cancellation problem, and reproduces the dispersion relation of Alfven waves and tearing modes in cylindrical geometry. Funded by US DOE FES and ASCR, and computing resources provided by OLCF through ALCC.
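A scalar Picard iteration with Aitken delta-squared acceleration illustrates the kind of fixed-point-accelerated solver described, applied here to a backward-Euler step of a toy stiff ODE; the XGC1 solver itself is not reproduced, and the test equation is invented.

```python
# Picard (fixed-point) iteration with Aitken delta-squared acceleration for
# solving an implicit update u = g(u), the generic shape of an implicit
# time-step equation.
def picard_aitken(g, u0, tol=1e-12, max_iter=100):
    u = u0
    for _ in range(max_iter):
        u1 = g(u)
        u2 = g(u1)
        denom = u2 - 2 * u1 + u
        # Aitken extrapolation accelerates the plain fixed-point sequence;
        # fall back to a plain step if the denominator degenerates.
        u_new = u - (u1 - u) ** 2 / denom if abs(denom) > 1e-30 else u2
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u

# Backward-Euler step for du/dt = -5*u**3 from u = 1 with dt = 0.1:
# solve u_new = 1 - 0.5 * u_new**3 by accelerated fixed point.
u_new = picard_aitken(lambda v: 1.0 - 0.5 * v ** 3, 1.0)
```

Plain Picard converges slowly here (the map's derivative is close to -1 in magnitude); the Aitken accelerator recovers fast convergence without forming a Jacobian, which is the appeal of such solvers in large PIC systems.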
Analyzing Dynamics of Cooperating Spacecraft
NASA Technical Reports Server (NTRS)
Hughes, Stephen P.; Folta, David C.; Conway, Darrel J.
2004-01-01
A software library has been developed to enable high-fidelity computational simulation of the dynamics of multiple spacecraft distributed over a region of outer space and acting with a common purpose. All of the modeling capabilities afforded by this software are available independently in other, separate software systems, but have not previously been brought together in a single system. A user can choose among several dynamical models, many high-fidelity environment models, and several numerical-integration schemes. The user can select whether to use models that assume weak coupling between spacecraft, or strong coupling in the case of feedback control or tethering of spacecraft to each other. For weak coupling, spacecraft orbits are propagated independently, and are synchronized in time by controlling the step size of the integration. For strong coupling, the orbits are integrated simultaneously. Among the integration schemes that the user can choose are Runge-Kutta-Verner, Prince-Dormand, Adams-Bashforth-Moulton, and Bulirsch-Stoer. Comparisons of performance are included for both the weak- and strong-coupling dynamical models for all of the numerical integrators.
On the Path Integral in Non-Commutative (nc) Qft
NASA Astrophysics Data System (ADS)
Dehne, Christoph
2008-09-01
As is generally known, different quantization schemes applied to field theory on NC spacetime lead to Feynman rules with different physical properties, if time does not commute with space. In particular, the Feynman rules that are derived from the path integral corresponding to the T*-product (the so-called naïve Feynman rules) violate the causal time ordering property. Within the Hamiltonian approach to quantum field theory, we show that we can (formally) modify the time ordering encoded in the above path integral. The resulting Feynman rules are identical to those obtained in the canonical approach via the Gell-Mann-Low formula (with T-ordering). They thus preserve unitarity and causal time ordering.
Chen, Hung-Ming; Lo, Jung-Wen; Yeh, Chang-Kuo
2012-12-01
The rapidly increased availability of always-on broadband telecommunication environments and lower-cost vital signs monitoring devices brings the advantages of telemedicine directly into the patient's home. Hence, the control of access to remote medical servers' resources has become a crucial challenge. A secure authentication scheme between the medical server and remote users is therefore needed to safeguard data integrity and confidentiality and to ensure availability. Recently, many authentication schemes that use low-cost mobile devices have been proposed to meet these requirements. In contrast to previous schemes, Khan et al. proposed a dynamic ID-based remote user authentication scheme that reduces computational complexity and includes features such as a provision for the revocation of lost or stolen smart cards and a time expiry check for the authentication process. However, Khan et al.'s scheme has some security drawbacks. To remedy these, this study proposes an enhanced authentication scheme that overcomes the weaknesses inherent in Khan et al.'s scheme and demonstrates that this scheme is more secure and robust for use in a telecare medical information system.
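A generic keyed-hash login message with a time-expiry check conveys the flavor of such authentication schemes. This sketch is illustrative only: it is neither Khan et al.'s protocol nor the enhancement proposed in the paper, and the key, user ID, and expiry window are invented.

```python
import hashlib
import hmac

EXPIRY_WINDOW = 60.0   # seconds a login message stays valid (illustrative)

def make_login_message(shared_key: bytes, user_id: str, now: float):
    """Client side: bind the user ID and a timestamp with a keyed hash."""
    ts = str(int(now))
    tag = hmac.new(shared_key, (user_id + ts).encode(),
                   hashlib.sha256).hexdigest()
    return user_id, ts, tag

def server_verify(shared_key: bytes, user_id, ts, tag, now: float) -> bool:
    """Server side: reject stale messages, then check the keyed hash."""
    if now - float(ts) > EXPIRY_WINDOW:        # time-expiry check
        return False
    expected = hmac.new(shared_key, (user_id + ts).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

key = b"shared-secret"
uid, ts, tag = make_login_message(key, "patient42", 1000.0)
fresh_ok = server_verify(key, uid, ts, tag, 1030.0)   # within the window
replayed = server_verify(key, uid, ts, tag, 2000.0)   # stale replay, rejected
```

The expiry check is what defeats simple replay of captured login messages, which is one of the weaknesses such enhancements typically target.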
USDA-ARS?s Scientific Manuscript database
The performance of conventional filtering methods can be degraded by ignoring the time lag between soil moisture and discharge response when discharge observations are assimilated into streamflow modelling. This has led to the ongoing development of more optimal ways to implement sequential data ass...
NASA Astrophysics Data System (ADS)
Ushaq, Muhammad; Fang, Jiancheng
2013-10-01
Integrated navigation systems for various applications generally employ the centralized Kalman filter (CKF), wherein all measured sensor data are communicated to a single central Kalman filter. The advantage of the CKF is minimal loss of information and high precision under benign conditions, but it may suffer from computational overload and poor fault tolerance. The alternative is the federated Kalman filter (FKF), wherein the local estimates can deliver an optimal or suboptimal state estimate according to a chosen information fusion criterion. The FKF has enhanced throughput and multiple-level fault detection capability. The standard CKF and FKF require that the system noise and the measurement noise be zero-mean and Gaussian; moreover, it is assumed that the covariances of the system and measurement noises remain constant. But if the theoretical and actual statistical features employed in the Kalman filter are not compatible, the filter does not render satisfactory solutions and divergence problems can occur. To resolve such problems, in this paper, an adaptive Kalman filter scheme strengthened with a fuzzy inference system (FIS) is employed to adapt the statistical features of the contributing sensors, online, in light of the real system dynamics and varying measurement noises. Excessive faults are detected and isolated by employing the chi-square test. As a case study, the presented scheme has been implemented on a Strapdown Inertial Navigation System (SINS) integrated with a Celestial Navigation System (CNS), GPS and Doppler radar using the FKF. Collectively, the overall system can be termed a SINS/CNS/GPS/Doppler integrated navigation system. The simulation results have validated the effectiveness of the presented scheme, with significantly enhanced precision, reliability and fault tolerance. The effectiveness of the scheme has been tested against simulated abnormal errors/noises during different time segments of flight.
It is believed that the presented scheme can be applied to the navigation system of aircraft or unmanned aerial vehicle (UAV).
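The chi-square test used above for fault detection can be sketched as a normalized-innovation check on a Kalman filter measurement; the matrices and threshold below are illustrative and are not the paper's SINS/CNS/GPS/Doppler values.

```python
import numpy as np

# Chi-square innovation test: flag a measurement as faulty when its
# normalized squared innovation exceeds a chi-square threshold, so the
# fused estimate is protected from the faulty sensor.
def chi_square_fault_test(z, H, x_pred, P_pred, R, threshold):
    r = z - H @ x_pred                        # innovation (residual)
    S = H @ P_pred @ H.T + R                  # innovation covariance
    d = float(r @ np.linalg.solve(S, r))      # normalized squared innovation
    return d, d > threshold                   # fault flag

H = np.eye(2)
P = 0.01 * np.eye(2)                          # predicted state covariance
R = 0.04 * np.eye(2)                          # measurement noise covariance
x_pred = np.array([1.0, 2.0])
thr = 9.21                                    # chi-square 99% point, 2 dof

d_ok, fault_ok = chi_square_fault_test(np.array([1.02, 1.97]),
                                       H, x_pred, P, R, thr)
d_bad, fault_bad = chi_square_fault_test(np.array([3.0, 2.0]),
                                         H, x_pred, P, R, thr)
```

Under fault-free operation the statistic follows a chi-square distribution with as many degrees of freedom as there are measurement components, which is what justifies the fixed threshold.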
Asymptotic analysis of discrete schemes for non-equilibrium radiation diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Xia, E-mail: cui_xia@iapcm.ac.cn; Yuan, Guang-wei; Shen, Zhi-jun
Motivated by providing well-behaved fully discrete schemes in practice, this paper extends the asymptotic analysis on time integration methods for non-equilibrium radiation diffusion in [2] to space discretizations. Therein, studies were carried out on a two-temperature model with Larsen's flux-limited diffusion operator, and both the implicitly balanced (IB) and linearly implicit (LI) methods were shown to be asymptotic-preserving. In this paper, we focus on asymptotic analysis for space discrete schemes in dimensions one and two. First, in the construction of the schemes, in contrast to traditional first-order approximations, asymmetric second-order accurate spatial approximations are devised for the flux-limiters on the boundary, and discrete schemes with second-order accuracy on the global spatial domain are consequently acquired. Then, by employing formal asymptotic analysis, the first-order asymptotic-preserving property is shown for these schemes and, furthermore, for the fully discrete schemes. Finally, with the help of manufactured solutions, numerical tests are performed, which demonstrate quantitatively that the fully discrete schemes with IB time evolution indeed have the accuracy and asymptotic convergence the theory predicts, and hence are well qualified for both non-equilibrium and equilibrium radiation diffusion. - Highlights: • Provide AP fully discrete schemes for non-equilibrium radiation diffusion. • Propose second-order accurate schemes by an asymmetric approach for the boundary flux-limiter. • Show the first-order AP property of spatially and fully discrete schemes with IB evolution. • Devise subtle artificial solutions; verify accuracy and AP property quantitatively. • Ideas can be generalized to 3-dimensional problems and higher order implicit schemes.
Time-of-flight depth image enhancement using variable integration time
NASA Astrophysics Data System (ADS)
Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong
2013-03-01
Time-of-Flight (ToF) cameras are used for a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should calculate a distance based on a large amount of infra-red light, which needs to be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times by exploiting an image fusion scheme proposed for color imaging. To calibrate depth differences due to the change of integration times, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images of different integration times. The depth images are then transformed into wavelet domains and fused into a depth image with suppressed noise and low motion artifacts. To evaluate the proposed method, we captured a moving bar of a metronome with different integration times. The experiment shows that the proposed method can effectively remove the motion artifacts while preserving an SNR comparable to that of depth images acquired with a long integration time.
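Estimating a transfer function between two depth images from their joint histogram can be sketched as follows; the binning, depth range, and synthetic data are invented, and the paper's actual estimator is not reproduced.

```python
import numpy as np

# Sketch: for each short-integration depth bin, take the long-integration
# depth bin with the highest joint count as the transfer-function value.
def depth_transfer(short_d, long_d, bins=64, lo=0.0, hi=4.0):
    h, _, long_edges = np.histogram2d(short_d.ravel(), long_d.ravel(),
                                      bins=bins, range=[[lo, hi], [lo, hi]])
    centers = 0.5 * (long_edges[:-1] + long_edges[1:])
    return centers[np.argmax(h, axis=1)]   # per-bin most likely long depth

rng = np.random.default_rng(1)
true_depth = rng.uniform(0.5, 3.5, size=10000)
short_d = true_depth + 0.05 * rng.standard_normal(10000)  # noisy, unbiased
long_d = 1.05 * true_depth + 0.1                          # biased but clean
transfer = depth_transfer(short_d, long_d)
```

The recovered mapping approximates the synthetic bias (here, a linear offset and scale), which is the calibration needed before the two exposures can be fused in the wavelet domain.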
Numerical time-domain electromagnetics based on finite-difference and convolution
NASA Astrophysics Data System (ADS)
Lin, Yuanqu
Time-domain methods possess a number of advantages over their frequency-domain counterparts for the solution of wideband, nonlinear, and time-varying electromagnetic scattering and radiation phenomena. Time-domain integral equation (TDIE)-based methods, which incorporate the beneficial properties of the integral equation method, are thus well suited for solving broadband scattering problems for homogeneous scatterers. Widespread adoption of TDIE solvers has been retarded relative to other techniques by their inefficiency, inaccuracy and instability. Moreover, two-dimensional (2D) problems are especially problematic, because 2D Green's functions have infinite temporal support, exacerbating these difficulties. This thesis proposes a finite difference delay modeling (FDDM) scheme for the solution of the integral equations of 2D transient electromagnetic scattering problems. The method discretizes the integral equations temporally using first- and second-order finite differences to map Laplace-domain equations into the Z domain before transforming to the discrete time domain. The resulting procedure is unconditionally stable because of the nature of the Laplace- to Z-domain mapping. The first FDDM method developed in this thesis uses second-order Lagrange basis functions with Galerkin's method for spatial discretization. The second application of the FDDM method discretizes the space using a locally corrected Nystrom method, which accelerates the precomputation phase and achieves high-order accuracy. The Fast Fourier Transform (FFT) is applied to accelerate the marching-on-in-time process in both methods. While FDDM methods demonstrate impressive accuracy and stability in solving wideband scattering problems for homogeneous scatterers, they still have limitations in analyzing interactions between several inhomogeneous scatterers.
Therefore, this thesis devises a multi-region finite-difference time-domain (MR-FDTD) scheme based on domain-optimal Green's functions for solving sparsely-populated problems. The scheme uses a discrete Green's function (DGF) on the FDTD lattice to truncate the local subregions, and thus reduces reflection error on the local boundary. A continuous Green's function (CGF) is implemented to pass the influence of external fields into each FDTD region which mitigates the numerical dispersion and anisotropy of standard FDTD. Numerical results will illustrate the accuracy and stability of the proposed techniques.
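A minimal one-dimensional FDTD leapfrog update in normalized units shows the standard-FDTD core that the proposed multi-region scheme builds on; the Green's-function truncation itself is not shown, and the grid size, source, and step count are illustrative.

```python
import numpy as np

# 1D FDTD (Yee) leapfrog update in normalized units (c = 1, dt = dx):
# E lives at integer grid points, H at half points between them.
n, steps = 400, 150
ez = np.zeros(n)          # electric field
hy = np.zeros(n - 1)      # magnetic field
for t in range(steps):
    hy += ez[1:] - ez[:-1]             # update H from the curl of E
    ez[1:-1] += hy[1:] - hy[:-1]       # update E from the curl of H
    ez[200] += np.exp(-0.5 * ((t - 30) / 8.0) ** 2)  # soft Gaussian source
```

The injected pulse splits into left- and right-going waves; in 1D at this (magic) time step the scheme is stable and dispersion-free, while in 2D and 3D the grid dispersion and anisotropy that the multi-region CGF/DGF construction mitigates appear.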
NASA Technical Reports Server (NTRS)
Gottlieb, D.; Turkel, E.
1980-01-01
New methods are introduced for the time integration of the Fourier and Chebyshev methods of solution for dynamic differential equations. These methods are unconditionally stable, even though no matrix inversions are required. Time steps are chosen by accuracy requirements alone. For the Fourier method both leapfrog and Runge-Kutta methods are considered. For the Chebyshev method only Runge-Kutta schemes are tested. Numerical calculations are presented to verify the analytic results. Applications to the shallow water equations are presented.
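The spatial half of the Fourier method, spectral differentiation on a periodic grid, can be sketched as follows; the time integrators the abstract discusses would then be applied to the resulting semi-discrete system. The grid size and test function are illustrative.

```python
import numpy as np

# Fourier (pseudospectral) evaluation of du/dx on a periodic grid of length
# L: differentiate by multiplying each Fourier mode by i*k.
def fourier_derivative(u, L):
    n = len(u)
    k = 2j * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers i*k
    return np.real(np.fft.ifft(k * np.fft.fft(u)))

n, L = 64, 2 * np.pi
x = np.arange(n) * L / n
# Derivative of sin(3x) is 3*cos(3x); spectral differentiation is exact
# (to roundoff) for any resolved mode.
err = np.max(np.abs(fourier_derivative(np.sin(3 * x), L) - 3 * np.cos(3 * x)))
```

The eigenvalues of this derivative operator grow with n, which is the source of the severe explicit time-step restrictions that the abstract's unconditionally stable integrators are designed to remove.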
Multigrid time-accurate integration of Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.
1993-01-01
Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.
Finite Volume Methods: Foundation and Analysis
NASA Technical Reports Server (NTRS)
Barth, Timothy; Ohlberger, Mario
2003-01-01
Finite volume methods are a class of discretization schemes that have proven highly successful in approximating the solution of a wide variety of conservation law systems. They are extensively used in fluid mechanics, porous media flow, meteorology, electromagnetics, models of biological processes, semiconductor device simulation and many other engineering areas governed by conservative systems that can be written in integral control volume form. This article reviews elements of the foundation and analysis of modern finite volume methods. The primary advantages of these methods are numerical robustness through the satisfaction of discrete maximum (minimum) principles, applicability on very general unstructured meshes, and the intrinsic local conservation properties of the resulting schemes. Throughout this article, specific attention is given to scalar nonlinear hyperbolic conservation laws and the development of high order accurate schemes for discretizing them. A key tool in the design and analysis of finite volume schemes suitable for non-oscillatory discontinuity capturing is discrete maximum principle analysis. A number of building blocks used in the development of numerical schemes possessing local discrete maximum principles are reviewed in one and several space dimensions, e.g. monotone fluxes, E-fluxes, TVD discretization, non-oscillatory reconstruction, slope limiters, positive coefficient schemes, etc. When available, theoretical results concerning a priori and a posteriori error estimates are given. Further advanced topics are then considered such as high order time integration, discretization of diffusion terms and the extension to systems of nonlinear conservation laws.
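A minimal instance of the slope-limiter machinery the article reviews is a minmod-limited MUSCL finite volume step for linear advection; the grid, Courant number, and initial data are invented for this sketch.

```python
import numpy as np

# Minmod-limited MUSCL finite volume step for u_t + u_x = 0 (advection
# speed a = 1 > 0) on a periodic grid: reconstruct a limited linear profile
# in each cell, then use the upwind face value as the numerical flux.
def minmod(a, b):
    return np.where(a * b > 0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_step(u, dt, dx):
    s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))  # limited slopes
    flux = u + 0.5 * s           # reconstructed value at each right face
    return u - dt / dx * (flux - np.roll(flux, 1))     # conservative update

n = 200
u = np.where((np.arange(n) > 40) & (np.arange(n) < 80), 1.0, 0.0)  # square
for _ in range(100):
    u = muscl_step(u, dt=0.4, dx=1.0)   # Courant number 0.4
```

Because the limited scheme satisfies a discrete maximum principle at this Courant number, the square wave is advected without creating new extrema (no overshoots above 1 or below 0), while total mass is conserved exactly by the flux form.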
NASA Astrophysics Data System (ADS)
Shi, Yu; Liang, Long; Ge, Hai-Wen; Reitz, Rolf D.
2010-03-01
Acceleration of the chemistry solver for engine combustion is of much interest because in practical engine simulations extensive computational time is spent solving the fuel oxidation and emission formation chemistry. A dynamic adaptive chemistry (DAC) scheme based on a directed relation graph error propagation (DRGEP) method has previously been applied to study homogeneous charge compression ignition (HCCI) engine combustion with detailed chemistry (over 500 species) using an R-value-based breadth-first search (RBFS) algorithm, which significantly reduced computational times (by as much as 30-fold). The present paper extends the use of this on-the-fly kinetic mechanism reduction scheme to model combustion in direct-injection (DI) engines. It was found that the DAC scheme becomes less efficient when applied to DI engine simulations using a kinetic mechanism of relatively small size, and the accuracy of the original DAC scheme decreases for conventional non-premixed combustion engines. The present study also focuses on determination of search-initiating species, involvement of the NOx chemistry, selection of a proper error tolerance, as well as treatment of the interaction of chemical heat release and the fuel spray. Both DAC schemes were integrated into the ERC KIVA-3v2 code, and simulations were conducted to compare the two schemes. In general, the present DAC scheme has better efficiency and similar accuracy compared to the previous DAC scheme. The efficiency depends on the size of the chemical kinetics mechanism used and the engine operating conditions. For cases using a small n-heptane kinetic mechanism of 34 species, 30% of the computational time is saved, and 50% for a larger n-heptane kinetic mechanism of 61 species.
The paper also demonstrates that by combining the present DAC scheme with an adaptive multi-grid chemistry (AMC) solver, it is feasible to simulate a direct-injection engine using a detailed n-heptane mechanism with 543 species with practical computer time.
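To make the DRGEP idea concrete, the following toy sketch (the species graph, coupling coefficients, and tolerance are invented for illustration and are not from the paper) computes R-values as maximal path products of coupling coefficients starting from a search-initiating species, then prunes species whose R-value falls below an error tolerance:

```python
import heapq

def drgep_rvalues(edges, target):
    """DRGEP-style R-values: edge weights in [0, 1] estimate direct coupling
    between species; R(s) is the maximum over paths from the target to s of
    the product of edge weights. Computed with a Dijkstra-like search that
    maximizes the product instead of minimizing a sum."""
    R = {target: 1.0}
    heap = [(-1.0, target)]
    while heap:
        negr, s = heapq.heappop(heap)
        r = -negr
        if r < R.get(s, 0.0):
            continue  # stale heap entry
        for t, w in edges.get(s, {}).items():
            rt = r * w
            if rt > R.get(t, 0.0):
                R[t] = rt
                heapq.heappush(heap, (-rt, t))
    return R

# toy coupling graph, not from a real mechanism
edges = {
    "fuel": {"O2": 0.9, "CO": 0.5},
    "O2": {"CO": 0.3},
    "CO": {"CO2": 0.8},
    "CO2": {},
    "N2": {},  # inert: unreachable from the target, so it gets pruned
}
R = drgep_rvalues(edges, "fuel")
active = {s for s in edges if R.get(s, 0.0) >= 0.1}  # skeletal species set
```

In an on-the-fly (DAC) setting this reduction would be repeated each time step with coupling coefficients evaluated from the local thermochemical state.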
A Study of Multiplexing Schemes for Voice and Data.
NASA Astrophysics Data System (ADS)
Sriram, Kotikalapudi
Voice traffic variations are characterized by on/off transitions of voice calls, and talkspurt/silence transitions of speakers in conversations. A speaker is known to be in silence for more than half the time during a telephone conversation. In this dissertation, we study some schemes which exploit speaker silences for an efficient utilization of the transmission capacity in integrated voice/data multiplexing and in digital speech interpolation. We study two voice/data multiplexing schemes. In each scheme, any time slots momentarily unutilized by the voice traffic are made available to data. In the first scheme, the multiplexer does not use speech activity detectors (SAD), and hence the voice traffic variations are due to call on/off only. In the second scheme, the multiplexer detects speaker silences using SAD and transmits voice only during talkspurts. The multiplexer with SAD performs digital speech interpolation (DSI) as well as dynamic channel allocation to voice and data. The performance of the two schemes is evaluated using discrete-time modeling and analysis. The data delay performance for the case of English speech is compared with that for the case of Japanese speech. A closed form expression for the mean data message delay is derived for the single-channel single-talker case. In a DSI system, occasional speech losses occur whenever the number of speakers in simultaneous talkspurt exceeds the number of TDM voice channels. In a buffered DSI system, speech loss is further reduced at the cost of delay. We propose a novel fixed-delay buffered DSI scheme. In this scheme, speech fill-in/hangover is not required because there are no variable delays. Hence, all silences that naturally occur in speech are fully utilized. Consequently, a substantial improvement in the DSI performance is made possible. The scheme is modeled and analyzed in discrete time.
Its performance is evaluated in terms of the probability of speech clipping, packet rejection ratio, DSI advantage, and the delay.
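A minimal discrete-time model of the speech-loss mechanism can be sketched as follows (the two-state talkspurt/silence Markov chain and all parameter values are illustrative assumptions, not the dissertation's calibrated model); speech is clipped whenever the number of simultaneous talkspurts exceeds the number of TDM channels:

```python
import random

def simulate_dsi(n_speakers=24, n_channels=12, p_on=0.02, p_off=0.03,
                 steps=20000, seed=1):
    """Toy discrete-time DSI model: each speaker is a two-state
    talkspurt/silence Markov chain; in a slot where the number of active
    talkers exceeds the TDM channels, the excess speech is clipped.
    All parameters are illustrative, not calibrated speech statistics."""
    rng = random.Random(seed)
    talking = [False] * n_speakers
    clipped = total = 0
    for _ in range(steps):
        for i in range(n_speakers):
            flip = p_off if talking[i] else p_on
            if rng.random() < flip:
                talking[i] = not talking[i]
        active = sum(talking)
        total += active
        clipped += max(0, active - n_channels)
    return clipped / max(total, 1)

clip_fraction = simulate_dsi()  # fraction of talkspurt slots clipped
```

With the stationary talkspurt probability p_on/(p_on + p_off) = 0.4, the mean number of active talkers (9.6) sits below the 12 channels, so clipping occurs only in occasional bursts; a buffered scheme would trade some of this clipping for delay.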
NASA Astrophysics Data System (ADS)
Glazyrina, O. V.; Pavlova, M. F.
2016-11-01
We consider a parabolic inequality whose space operator is monotone with respect to the gradient and depends on an integral (over the space variables) characteristic of the solution. We construct a two-level difference scheme for this problem using the penalty method, semidiscretization in the time variable, and the finite element method (FEM) in the space variables. We prove convergence of the constructed method.
Universal fuzzy integral sliding-mode controllers for stochastic nonlinear systems.
Gao, Qing; Liu, Lu; Feng, Gang; Wang, Yong
2014-12-01
In this paper, the universal integral sliding-mode controller problem for the general stochastic nonlinear systems modeled by Itô type stochastic differential equations is investigated. One of the main contributions is that a novel dynamic integral sliding mode control (DISMC) scheme is developed for stochastic nonlinear systems based on their stochastic T-S fuzzy approximation models. The key advantage of the proposed DISMC scheme is that two very restrictive assumptions in most existing ISMC approaches to stochastic fuzzy systems have been removed. Based on the stochastic Lyapunov theory, it is shown that the closed-loop control system trajectories are kept on the integral sliding surface almost surely from the initial time onward, and moreover, the stochastic stability of the sliding motion can be guaranteed in terms of linear matrix inequalities. Another main contribution is that the results of universal fuzzy integral sliding-mode controllers for two classes of stochastic nonlinear systems, along with constructive procedures to obtain the universal fuzzy integral sliding-mode controllers, are provided, respectively. Simulation results from an inverted pendulum example are presented to illustrate the advantages and effectiveness of the proposed approaches.
SIMULATING ATMOSPHERIC EXPOSURE USING AN INNOVATIVE METEOROLOGICAL SAMPLING SCHEME
Multimedia Risk assessments require the temporal integration of atmospheric concentration and deposition estimates with other media modules. However, providing an extended time series of estimates is computationally expensive. An alternative approach is to substitute long-ter...
Comparison of two integration methods for dynamic causal modeling of electrophysiological data.
Lemaréchal, Jean-Didier; George, Nathalie; David, Olivier
2018-06-01
Dynamic causal modeling (DCM) is a methodological approach to study effective connectivity among brain regions. Based on a set of observations and a biophysical model of brain interactions, DCM uses a Bayesian framework to estimate the posterior distribution of the free parameters of the model (e.g. modulation of connectivity) and infer architectural properties of the most plausible model (i.e. model selection). When modeling electrophysiological event-related responses, the estimation of the model relies on the integration of the system of delay differential equations (DDEs) that describe the dynamics of the system. In this technical note, we compared two numerical schemes for the integration of DDEs. The first, and standard, scheme approximates the DDEs (more precisely, the state of the system, with respect to conduction delays among brain regions) using ordinary differential equations (ODEs) and solves the resulting system with a fixed step size. The second scheme uses a dedicated DDEs solver with adaptive step sizes to control error, making it theoretically more accurate. To highlight the effects of the approximation used by the first integration scheme with regard to parameter estimation and Bayesian model selection, we performed simulations of local field potentials using first, a simple model comprising 2 regions and second, a more complex model comprising 6 regions. In these simulations, the second integration scheme served as the standard to which the first one was compared. Then, the performances of the two integration schemes were directly compared by fitting a public mismatch negativity EEG dataset with different models. The simulations revealed that the use of the standard DCM integration scheme was acceptable for Bayesian model selection but underestimated the connectivity parameters and did not allow an accurate estimation of conduction delays.
Fitting to empirical data showed that the models systematically achieved higher accuracy when using the second integration scheme. We conclude that inference on connectivity strength and delay based on DCM for EEG/MEG requires an accurate integration scheme. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
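The fixed-step approximation at issue can be illustrated on a scalar delay equation (a hedged sketch; DCM's actual neural-mass state equations and integrators are far richer): the delayed state is read from a buffer of stored past values, so the step size fixes both the integration error and the resolution of the delay:

```python
import numpy as np

def euler_dde(f, history, tau, t_end, dt):
    """Fixed-step explicit Euler for x'(t) = f(x(t), x(t - tau)), t >= 0,
    with x(t) = history(t) for t <= 0. The delayed state is read from a
    buffer of past values, assuming tau is an integer multiple of dt."""
    lag = round(tau / dt)
    n = round(t_end / dt)
    xs = [history(0.0)]
    for k in range(n):
        t = k * dt
        x_delayed = history(t - tau) if k < lag else xs[k - lag]
        xs.append(xs[k] + dt * f(xs[k], x_delayed))
    return np.array(xs)

# x'(t) = -x(t - 1) with x(t) = 1 for t <= 0; by the method of steps the
# exact solution satisfies x(2) = -1/2
xs = euler_dde(lambda x, xd: -xd, lambda t: 1.0, tau=1.0, t_end=2.0, dt=1e-3)
```

An adaptive DDE solver of the kind used as the reference scheme in the note controls this error automatically instead of relying on a fixed dt.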
Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems
NASA Astrophysics Data System (ADS)
Arrarás, A.; Portero, L.; Yotov, I.
2014-01-01
We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.
A far-field non-reflecting boundary condition for two-dimensional wake flows
NASA Technical Reports Server (NTRS)
Danowitz, Jeffrey S.; Abarbanel, Saul A.; Turkel, Eli
1995-01-01
Far-field boundary conditions for external flow problems have been developed based upon long-wave perturbations of linearized flow equations about a steady-state far-field solution. The boundary condition improves convergence to steady state in single-grid temporal integration schemes using both regular time stepping and local time stepping. The far-field boundary may be placed near the trailing edge of the body, which significantly reduces the number of grid points, and therefore the computational time, in the numerical calculation. In addition, the solution produced is smoother in the far field than when using extrapolation conditions. The boundary condition maintains the convergence rate to steady state in schemes utilizing multigrid acceleration.
Pernice, W H; Payne, F P; Gallagher, D F
2007-09-03
We present a novel numerical scheme for the simulation of the field enhancement by metal nano-particles in the time domain. The algorithm is based on a combination of the finite-difference time-domain method and the pseudo-spectral time-domain method for dispersive materials. The hybrid solver leads to an efficient subgridding algorithm that does not suffer from spurious field spikes as do FDTD schemes. Simulation of the field enhancement by gold particles shows the expected exponential field profile. The enhancement factors are computed for single particles and particle arrays. Due to the geometry conforming mesh the algorithm is stable for long integration times and thus suitable for the simulation of resonance phenomena in coupled nano-particle structures.
Evaluation of the transport matrix method for simulation of ocean biogeochemical tracers
NASA Astrophysics Data System (ADS)
Kvale, Karin F.; Khatiwala, Samar; Dietze, Heiner; Kriest, Iris; Oschlies, Andreas
2017-06-01
Conventional integration of Earth system and ocean models can accrue considerable computational expenses, particularly for marine biogeochemical applications. Offline numerical schemes in which only the biogeochemical tracers are time stepped and transported using a pre-computed circulation field can substantially reduce the burden and are thus an attractive alternative. One such scheme is the transport matrix method (TMM), which represents tracer transport as a sequence of sparse matrix-vector products that can be performed efficiently on distributed-memory computers. While the TMM has been used for a variety of geochemical and biogeochemical studies, to date the resulting solutions have not been comprehensively assessed against their online counterparts. Here, we present a detailed comparison of the two. It is based on simulations of the state-of-the-art biogeochemical sub-model embedded within the widely used coarse-resolution University of Victoria Earth System Climate Model (UVic ESCM). The default, non-linear advection scheme was first replaced with a linear, third-order upwind-biased advection scheme to satisfy the linearity requirement of the TMM. Transport matrices were extracted from an equilibrium run of the physical model and subsequently used to integrate the biogeochemical model offline to equilibrium. The identical biogeochemical model was also run online. Our simulations show that offline integration introduces some bias to biogeochemical quantities through the omission of the polar filtering used in UVic ESCM and in the offline application of time-dependent forcing fields, with high latitudes showing the largest differences with respect to the online model. Differences in other regions and in the seasonality of nutrients and phytoplankton distributions are found to be relatively minor, giving confidence that the TMM is a reliable tool for offline integration of complex biogeochemical models. Moreover, while UVic ESCM is a serial code, the TMM can be run on a parallel machine with no change to the underlying biogeochemical code, thus providing orders of magnitude speed-up over the online model.
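The core TMM operation, stepping a tracer forward by repeated sparse matrix-vector products, can be sketched as follows (a toy explicit 1-D diffusion operator stands in for the pre-computed circulation matrix; all sizes and coefficients are illustrative):

```python
import numpy as np
from scipy.sparse import diags

# Toy "transport matrix": explicit diffusion on a 1-D grid with no-flux
# boundaries. In the TMM this matrix would be extracted from an ocean model
# run; tracer stepping then needs only sparse matrix-vector products.
n = 100
dx, dt, kappa = 1.0, 0.1, 1.0
r = kappa * dt / dx**2            # diffusion number, r < 1/2 for monotonicity
main = np.full(n, 1.0 - 2.0 * r)
main[0] = main[-1] = 1.0 - r      # boundary rows keep column sums equal to 1
A = diags([np.full(n - 1, r), main, np.full(n - 1, r)], [-1, 0, 1],
          format="csr")

c = np.zeros(n)
c[n // 2] = 1.0                   # initial tracer blob
for _ in range(500):
    c = A @ c                     # offline transport: one spmv per step
```

Because each column of the toy matrix sums to one, tracer mass is conserved exactly under the matrix-vector stepping, mirroring the local conservation of the pre-computed transport; a biogeochemical source/sink term would be added to each step in an actual offline integration.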
Designing Adaptive Low-Dissipative High Order Schemes for Long-Time Integrations. Chapter 1
NASA Technical Reports Server (NTRS)
Yee, Helen C.; Sjoegreen, B.; Mansour, Nagi N. (Technical Monitor)
2001-01-01
A general framework for the design of adaptive low-dissipative high order schemes is presented. It encompasses a rather complete treatment of the numerical approach based on four integrated design criteria: (1) For stability considerations, condition the governing equations before the application of the appropriate numerical scheme whenever it is possible; (2) For consistency, compatible schemes that possess stability properties, including physical and numerical boundary condition treatments, similar to those of the discrete analogue of the continuum are preferred; (3) For the minimization of numerical dissipation contamination, efficient and adaptive numerical dissipation control to further improve nonlinear stability and accuracy should be used; and (4) For practical considerations, the numerical approach should be efficient and applicable to general geometries, and an efficient and reliable dynamic grid adaptation should be used if necessary. These design criteria are, in general, very useful to a wide spectrum of flow simulations. However, the demand on the overall numerical approach for nonlinear stability and accuracy is much more stringent for long-time integration of complex multiscale viscous shock/shear/turbulence/acoustics interactions and numerical combustion. Robust classical numerical methods for less complex flow physics are not suitable or practical for such applications. The present approach is designed expressly to address such flow problems, especially unsteady flows. The minimization of employing very fine grids to overcome the production of spurious numerical solutions and/or instability due to under-resolved grids is also sought. The incremental studies to illustrate the performance of the approach are summarized. Extensive testing and full implementation of the approach is forthcoming. The results shown so far are very encouraging.
Das, Ashok Kumar
2015-03-01
An integrated EPR (Electronic Patient Record) information system provides medical institutions and academia with detailed patient information with which to make corrective and clinical decisions in order to maintain and analyze patients' health. In such a system, illegal access must be restricted, and theft of information during transmission over the insecure Internet must be prevented. Lee et al. proposed an efficient password-based remote user authentication scheme using smart cards for the integrated EPR information system. Their scheme is very efficient due to its use of one-way hash functions and bitwise exclusive-or (XOR) operations. However, in this paper, we show that though their scheme is very efficient, it has three security weaknesses: (1) it has design flaws in the password change phase, (2) it fails to protect against the privileged insider attack and (3) it lacks formal security verification. We also find that another recently proposed scheme, Wen's scheme, has the same security drawbacks as Lee et al.'s scheme. In order to remedy these security weaknesses found in Lee et al.'s scheme and Wen's scheme, we propose a secure and efficient password-based remote user authentication scheme using smart cards for the integrated EPR information system. We show that our scheme is also efficient compared to Lee et al.'s scheme and Wen's scheme, as our scheme only uses one-way hash functions and bitwise exclusive-or (XOR) operations. Through the security analysis, we show that our scheme is secure against possible known attacks. Furthermore, we simulate our scheme for formal security verification using the widely-accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool and show that our scheme is secure against passive and active attacks.
Integrable high order UWB pulse photonic generator based on cross phase modulation in a SOA-MZI.
Moreno, Vanessa; Rius, Manuel; Mora, José; Muriel, Miguel A; Capmany, José
2013-09-23
We propose and experimentally demonstrate a potentially integrable optical scheme to generate high order UWB pulses. The technique is based on exploiting the cross phase modulation generated in an InGaAsP Mach-Zehnder interferometer containing integrated semiconductor optical amplifiers, and is also adaptable to different pulse modulation formats through an optical processing unit which allows control of the amplitude, polarity and time delay of the generated taps.
SIMULATIONS OF 2D AND 3D THERMOCAPILLARY FLOWS BY A LEAST-SQUARES FINITE ELEMENT METHOD. (R825200)
Numerical results for time-dependent 2D and 3D thermocapillary flows are presented in this work. The numerical algorithm is based on the Crank-Nicolson scheme for time integration, Newton's method for linearization, and a least-squares finite element method, together with a matri...
Efficient variable time-stepping scheme for intense field-atom interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerjan, C.; Kosloff, R.
1993-03-01
The recently developed Residuum method [Tal-Ezer, Kosloff, and Cerjan, J. Comput. Phys. 100, 179 (1992)], a Krylov subspace technique with variable time-step integration for the solution of the time-dependent Schroedinger equation, is applied to the frequently used soft Coulomb potential in an intense laser field. This one-dimensional potential has asymptotic Coulomb dependence with a "softened" singularity at the origin; thus it models more realistic phenomena. Two of the more important quantities usually calculated in this idealized system are the photoelectron and harmonic photon generation spectra. These quantities are shown to be sensitive to the choice of a numerical integration scheme: some spectral features are incorrectly calculated or missing altogether. Furthermore, the Residuum method allows much larger grid spacings for equivalent or higher accuracy in addition to the advantages of variable time stepping. Finally, it is demonstrated that enhanced high-order harmonic generation accompanies intense field stabilization and that preparation of the atom in an intermediate Rydberg state leads to stabilization at much lower laser intensity.
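The flavor of Krylov-subspace propagation for this problem can be sketched with SciPy's expm_multiply, which applies exp(-iH dt) to a state vector using only matrix-vector products (the grid, soft-core parameter, and step size below are illustrative assumptions; this is not the Residuum method itself, which additionally adapts the time step):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# 1-D soft Coulomb potential V(x) = -1 / sqrt(x^2 + a^2) on a finite grid
n, L, a = 400, 40.0, 1.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
V = -1.0 / np.sqrt(x**2 + a**2)
lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
H = (-0.5 * lap + diags(V)).tocsc()           # Hamiltonian, atomic units

psi = np.exp(-(x + 5.0)**2).astype(complex)   # arbitrary initial packet
psi /= np.linalg.norm(psi)
psi = expm_multiply(-1j * 0.05 * H, psi)      # one step of exp(-iH dt) psi
```

Since H is Hermitian, the propagator is unitary, so the norm of the wavefunction is preserved to the accuracy of the Krylov approximation; a laser field would enter as a time-dependent term in V.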
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ju, E-mail: jliu@ices.utexas.edu; Gomez, Hector; Evans, John A.
2013-09-01
We propose a new methodology for the numerical solution of the isothermal Navier–Stokes–Korteweg equations. Our methodology is based on a semi-discrete Galerkin method invoking functional entropy variables, a generalization of classical entropy variables, and a new time integration scheme. We show that the resulting fully discrete scheme is unconditionally stable-in-energy, second-order time-accurate, and mass-conservative. We utilize isogeometric analysis for spatial discretization and verify the aforementioned properties by adopting the method of manufactured solutions and comparing coarse mesh solutions with overkill solutions. Various problems are simulated to show the capability of the method. Our methodology provides a means of constructing unconditionally stable numerical schemes for nonlinear non-convex hyperbolic systems of conservation laws.
Ihmsen, Markus; Cornelis, Jens; Solenthaler, Barbara; Horvath, Christopher; Teschner, Matthias
2013-07-25
We propose a novel formulation of the projection method for Smoothed Particle Hydrodynamics (SPH). We combine a symmetric SPH pressure force and an SPH discretization of the continuity equation to obtain a discretized form of the pressure Poisson equation (PPE). In contrast to previous projection schemes, our system does consider the actual computation of the pressure force. This incorporation improves the convergence rate of the solver. Furthermore, we propose to compute the density deviation based on velocities instead of positions as this formulation improves the robustness of the time-integration scheme. We show that our novel formulation outperforms previous projection schemes and state-of-the-art SPH methods. Large time steps and small density deviations of down to 0.01% can be handled in typical scenarios. The practical relevance of the approach is illustrated by scenarios with up to 40 million SPH particles.
Ihmsen, Markus; Cornelis, Jens; Solenthaler, Barbara; Horvath, Christopher; Teschner, Matthias
2014-03-01
We propose a novel formulation of the projection method for Smoothed Particle Hydrodynamics (SPH). We combine a symmetric SPH pressure force and an SPH discretization of the continuity equation to obtain a discretized form of the pressure Poisson equation (PPE). In contrast to previous projection schemes, our system does consider the actual computation of the pressure force. This incorporation improves the convergence rate of the solver. Furthermore, we propose to compute the density deviation based on velocities instead of positions as this formulation improves the robustness of the time-integration scheme. We show that our novel formulation outperforms previous projection schemes and state-of-the-art SPH methods. Large time steps and small density deviations of down to 0.01 percent can be handled in typical scenarios. The practical relevance of the approach is illustrated by scenarios with up to 40 million SPH particles.
NASA Astrophysics Data System (ADS)
Xing, Yanyuan; Yan, Yubin
2018-03-01
Gao et al. [11] (2014) introduced a numerical scheme to approximate the Caputo fractional derivative with the convergence rate O(k^{3-α}), 0 < α < 1, by directly approximating the integer-order derivative with some finite difference quotients in the definition of the Caputo fractional derivative, see also Lv and Xu [20] (2016), where k is the time step size. Under the assumption that the solution of the time fractional partial differential equation is sufficiently smooth, Lv and Xu [20] (2016) proved by using the energy method that the corresponding numerical method for solving the time fractional partial differential equation has the convergence rate O(k^{3-α}), 0 < α < 1, uniformly with respect to the time variable t. However, in general the solution of the time fractional partial differential equation has low regularity, and in this case the numerical method fails to have the convergence rate O(k^{3-α}), 0 < α < 1, uniformly with respect to the time variable t. In this paper, we first obtain a similar approximation scheme to the Riemann-Liouville fractional derivative with the convergence rate O(k^{3-α}), 0 < α < 1, as in Gao et al. [11] (2014) by approximating the Hadamard finite-part integral with piecewise quadratic interpolation polynomials. Based on this scheme, we introduce a time discretization scheme to approximate the time fractional partial differential equation and show by using Laplace transform methods that the time discretization scheme has the convergence rate O(k^{3-α}), 0 < α < 1, for any fixed t_n > 0 for smooth and nonsmooth data in both homogeneous and inhomogeneous cases. Numerical examples are given to show that the theoretical results are consistent with the numerical results.
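For orientation, the simpler piecewise-linear (L1) approximation of the Caputo derivative, the lower-order cousin of the quadratic-interpolation scheme discussed above, can be sketched and checked against a known closed form (a hedged illustration, not the paper's O(k^{3-α}) scheme):

```python
import math
import numpy as np

def caputo_l1(u, k, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0, 1)
    at the final grid point t_n = n*k, built from piecewise-linear
    interpolation of u; it converges at rate O(k^{2-alpha}) for smooth u."""
    n = len(u) - 1
    weights = np.array([(j + 1)**(1 - alpha) - j**(1 - alpha)
                        for j in range(n)])
    du = np.diff(u)  # increments u_{j+1} - u_j
    # weight (n-j)^{1-a} - (n-j-1)^{1-a} multiplies the increment on [t_j, t_{j+1}]
    return k**(-alpha) / math.gamma(2 - alpha) * np.dot(weights[::-1], du)

# check against the closed form: for u(t) = t^2 the Caputo derivative of
# order alpha is 2 t^{2-alpha} / Gamma(3 - alpha)
alpha, k, n = 0.5, 1.0e-3, 1000
t = np.arange(n + 1) * k
approx = caputo_l1(t**2, k, alpha)
exact = 2.0 * t[-1]**(2 - alpha) / math.gamma(3 - alpha)
```

The quadratic-interpolation construction in the paper replaces the piecewise-linear interpolant with piecewise quadratics, raising the rate to O(k^{3-α}) for sufficiently smooth solutions; the low-regularity analysis is exactly about when that rate survives.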
Hao, Li-Ying; Park, Ju H; Ye, Dan
2017-09-01
In this paper, a new robust fault-tolerant compensation control method for uncertain linear systems over networks is proposed, where only quantized signals are assumed to be available. This approach is based on the integral sliding mode (ISM) method where two kinds of integral sliding surfaces are constructed. One is the continuous-state-dependent surface with the aim of sliding mode stability analysis and the other is the quantization-state-dependent surface, which is used for ISM controller design. A scheme that combines the adaptive ISM controller and quantization parameter adjustment strategy is then proposed. Through utilizing H ∞ control analytical technique, once the system is in the sliding mode, the nature of performing disturbance attenuation and fault tolerance from the initial time can be found without requiring any fault information. Finally, the effectiveness of our proposed ISM control fault-tolerant schemes against quantization errors is demonstrated in the simulation.
Efficient coarse simulation of a growing avascular tumor
Kavousanakis, Michail E.; Liu, Ping; Boudouvis, Andreas G.; Lowengrub, John; Kevrekidis, Ioannis G.
2013-01-01
The subject of this work is the development and implementation of algorithms which accelerate the simulation of early stage tumor growth models. Among the different computational approaches used for the simulation of tumor progression, discrete stochastic models (e.g., cellular automata) have been widely used to describe processes occurring at the cell and subcell scales (e.g., cell-cell interactions and signaling processes). To describe macroscopic characteristics (e.g., morphology) of growing tumors, large numbers of interacting cells must be simulated. However, the high computational demands of stochastic models make the simulation of large-scale systems impractical. Alternatively, continuum models, which can describe behavior at the tumor scale, often rely on phenomenological assumptions in place of rigorous upscaling of microscopic models. This limits their predictive power. In this work, we circumvent the derivation of closed macroscopic equations for the growing cancer cell populations; instead, we construct, based on the so-called “equation-free” framework, a computational superstructure, which wraps around the individual-based cell-level simulator and accelerates the computations required for the study of the long-time behavior of systems involving many interacting cells. The microscopic model, e.g., a cellular automaton, which simulates the evolution of cancer cell populations, is executed for relatively short time intervals, at the end of which coarse-scale information is obtained. These coarse variables evolve on slower time scales than each individual cell in the population, enabling the application of forward projection schemes, which extrapolate their values at later times. This technique is referred to as coarse projective integration. Increasing the ratio of projection times to microscopic simulator execution times enhances the computational savings. 
Crucial accuracy issues arising for growing tumors with radial symmetry are addressed by applying the coarse projective integration scheme in a cotraveling (cogrowing) frame. As a proof of principle, we demonstrate that the application of this scheme yields highly accurate solutions, while preserving the computational savings of coarse projective integration. PMID:22587128
Strategy for reflector pattern calculation - Let the computer do the work
NASA Technical Reports Server (NTRS)
Lam, P. T.; Lee, S.-W.; Hung, C. C.; Acosta, R.
1986-01-01
Using high frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector geometry dependent. Hence, different analyses/computer software development must be carried out for different reflector shapes/boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) by a brute force FFT described in this note. Furthermore, there is virtually no restriction on the reflector geometry by using the brute force FFT.
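The brute-force evaluation advocated here can be sketched in a few lines (the circular aperture, grid sizes, and sign convention below are illustrative assumptions): sample the aperture field on a grid, zero-pad, and take one 2-D FFT; a direct quadrature at a single frequency confirms that the FFT reproduces the sampled integral exactly:

```python
import numpy as np

# Radiation-type integral I(u, v) = integral of A(x, y) * exp(-j(u x + v y))
# over the aperture, evaluated by sampling A and taking one zero-padded FFT.
# Geometry enters only through the sampled aperture function A.
N, pad, L = 64, 256, 1.0
x = np.linspace(-L, L, N, endpoint=False)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
A = (X**2 + Y**2 <= L**2).astype(float)  # uniformly illuminated disc

Apad = np.zeros((pad, pad))
Apad[:N, :N] = A
I_fft = np.fft.fft2(Apad) * dx**2        # I at u_m = 2*pi*m/(pad*dx), up to a
                                         # linear phase from the grid origin

# spot check one frequency against direct (brute force) quadrature
m1, m2 = 5, 7
w = np.exp(-2j * np.pi / pad)
p = np.arange(N)
direct = (A * np.outer(w**(m1 * p), w**(m2 * p))).sum() * dx**2
```

The zero-padding controls the output frequency spacing, so denser far-field sampling costs only a larger FFT, with no reflector-specific analysis.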
Strategy for reflector pattern calculation: Let the computer do the work
NASA Technical Reports Server (NTRS)
Lam, P. T.; Lee, S. W.; Hung, C. C.; Acousta, R.
1985-01-01
Using high frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector geometry dependent. Hence, different analyses/computer software development must be carried out for different reflector shapes/boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) by a brute force FFT described in this note. Furthermore, there is virtually no restriction on the reflector geometry by using the brute force FFT.
JANUS: a bit-wise reversible integrator for N-body dynamics
NASA Astrophysics Data System (ADS)
Rein, Hanno; Tamayo, Daniel
2018-01-01
Hamiltonian systems such as the gravitational N-body problem have time-reversal symmetry. However, all numerical N-body integration schemes, including symplectic ones, respect this property only approximately. In this paper, we present the new N-body integrator JANUS, for which we achieve exact time-reversal symmetry by combining integer and floating point arithmetic. JANUS is explicit, formally symplectic and satisfies Liouville's theorem exactly. Its order is even and can be adjusted between two and ten. We discuss the implementation of JANUS and present tests of its accuracy and speed by performing and analysing long-term integrations of the Solar system. We show that JANUS is fast and accurate enough to tackle a broad class of dynamical problems. We also discuss the practical and philosophical implications of running exactly time-reversible simulations.
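The mechanism behind exact reversibility can be sketched as follows (a one-dimensional harmonic oscillator in fixed-point arithmetic; the scale and step size are illustrative, and this is a simplified kick-drift variant rather than JANUS itself): because each integer sub-update is a deterministic function of state that is later subtracted back, the backward sweep restores the initial state bit for bit, rounding and all:

```python
SCALE = 10**9   # 1.0 in fixed point
DT_INV = 1000   # time step dt = 1/1000

def kick(x):
    # integer velocity increment from a = -x; floor division rounds, but
    # reversibility needs only determinism, not exact rounding
    return -x // DT_INV

def drift(v):
    return v // DT_INV

def step_forward(x, v):
    v = v + kick(x)
    x = x + drift(v)
    return x, v

def step_backward(x, v):
    # exact inverse: undo the sub-updates in reverse order with the
    # same integer increments recomputed from the same arguments
    x = x - drift(v)
    v = v - kick(x)
    return x, v

x0, v0 = SCALE, 0   # x = 1.0, v = 0.0 in fixed point
x, v = x0, v0
for _ in range(5000):
    x, v = step_forward(x, v)
for _ in range(5000):
    x, v = step_backward(x, v)
```

Floating point schemes lose this property because rounding in the backward pass need not cancel the rounding of the forward pass; on the integer grid the cancellation is exact by construction.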
NASA Technical Reports Server (NTRS)
Duyar, A.; Guo, T.-H.; Merrill, W.; Musgrave, J.
1992-01-01
In a previous study, Guo, Merrill and Duyar, 1990, reported a conceptual development of a fault detection and diagnosis system for actuation faults of the space shuttle main engine. This study, which is a continuation of the previous work, implements the developed fault detection and diagnosis scheme for the real time actuation fault diagnosis of the space shuttle main engine. The scheme will be used as an integral part of an intelligent control system demonstration experiment at NASA Lewis. The diagnosis system utilizes a model based method with real time identification and hypothesis testing for actuation, sensor, and performance degradation faults.
Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Ash, Robert L.
1992-01-01
The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employs a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the k-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order, central-difference operator. Two classes of explicit time integration have been investigated for solving the compressible inviscid/viscous flow problems: two-stage predictor-corrector schemes and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes were modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C0 continuity of the grid across block interfaces. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow).
The emphasis of the test cases was on code validation, performance assessment, and demonstration of flexibility.
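The multistage time stepping discussed above can be sketched for a scalar model problem. The coefficients below are a standard four-stage set (recovering fourth-order accuracy for linear problems), not the report's optimized upwind coefficients.

```python
# m-stage scheme u_k = u_n + a_k * dt * R(u_{k-1}); the a_k can be retuned
# for good high-frequency damping at large CFL numbers.
def multistage_step(u, dt, R, alphas=(0.25, 1/3, 0.5, 1.0)):
    u0, uk = u, u
    for a in alphas:
        uk = u0 + a * dt * R(uk)
    return uk

# model problem u' = -u, integrated to t = 1
u = 1.0
for _ in range(100):
    u = multistage_step(u, 0.01, lambda v: -v)
```

Only the residual of the previous stage is needed, so the scheme has the low storage cost that makes it attractive for large grids.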
TripSense: A Trust-Based Vehicular Platoon Crowdsensing Scheme with Privacy Preservation in VANETs
Hu, Hao; Lu, Rongxing; Huang, Cheng; Zhang, Zonghua
2016-01-01
In this paper, we propose a trust-based vehicular platoon crowdsensing scheme, named TripSense, in VANETs. The proposed TripSense scheme introduces a trust-based system to evaluate vehicles’ sensing abilities and then selects the more capable vehicles in order to improve the accuracy of sensing results. In addition, the sensing tasks are accomplished by platoon member (PM) vehicles and preprocessed by platoon head vehicles before the data are uploaded to the server. Hence, it is less time-consuming and more efficient than having the data submitted by individual platoon member vehicles, and thus better suited to ephemeral networks like VANETs. Moreover, our proposed TripSense scheme integrates unlinkable pseudo-ID techniques to achieve PM vehicle identity privacy, and employs a privacy-preserving sensing vehicle selection scheme that does not involve the PM vehicle’s trust score, thereby preserving its location privacy. Detailed security analysis shows that our proposed TripSense scheme not only achieves the desired privacy requirements but also resists attacks launched by adversaries. In addition, extensive simulations are conducted to show the correctness and effectiveness of our proposed scheme. PMID:27258287
Towards information-optimal simulation of partial differential equations.
Leike, Reimar H; Enßlin, Torsten A
2018-03-01
Most simulation schemes for partial differential equations (PDEs) focus on minimizing a simple error norm of a discretized version of a field. This paper takes a fundamentally different approach; the discretized field is interpreted as data providing information about a real physical field that is unknown. This information is sought to be conserved by the scheme as the field evolves in time. Such an information theoretic approach to simulation was pursued before by information field dynamics (IFD). In this paper we work out the theory of IFD for nonlinear PDEs in a noiseless Gaussian approximation. The result is an action that can be minimized to obtain an information-optimal simulation scheme. It can be brought into a closed form using field operators to calculate the appearing Gaussian integrals. The resulting simulation schemes are tested numerically in two instances for the Burgers equation. Their accuracy surpasses finite-difference schemes on the same resolution. The IFD scheme, however, has to be correctly informed on the subgrid correlation structure. In certain limiting cases we recover well-known simulation schemes like spectral Fourier-Galerkin methods. We discuss implications of the approximations made.
Kedziora, D J; Ankiewicz, A; Chowdury, A; Akhmediev, N
2015-10-01
We present an infinite nonlinear Schrödinger equation hierarchy of integrable equations, together with the recurrence relations defining it. To demonstrate integrability, we present the Lax pairs for the whole hierarchy, specify its Darboux transformations and provide several examples of solutions. These resulting wavefunctions are given in exact analytical form. We then show that the Lax pair and Darboux transformation formalisms still apply in this scheme when the coefficients in the hierarchy depend on the propagation variable (e.g., time). This extension thus allows for the construction of complicated solutions within a greatly diversified domain of generalised nonlinear systems.
Hou, Chieh; Ateshian, Gerard A.
2015-01-01
Fibrous biological tissues may be modeled using a continuous fiber distribution (CFD) to capture tension-compression nonlinearity, anisotropic fiber distributions, and load-induced anisotropy. The CFD framework requires spherical integration of weighted individual fiber responses, with fibers contributing to the stress response only when they are in tension. The common method for performing this integration employs the discretization of the unit sphere into a polyhedron with nearly uniform triangular faces (finite element integration or FEI scheme). Although FEI has proven to be more accurate and efficient than integration using spherical coordinates, it presents three major drawbacks: First, the number of elements on the unit sphere needed to achieve satisfactory accuracy becomes a significant computational cost in a finite element analysis. Second, fibers may not be in tension in some regions on the unit sphere, where the integration becomes a waste. Third, if tensed fiber bundles span a small region compared to the area of the elements on the sphere, a significant discretization error arises. This study presents an integration scheme specialized to the CFD framework, which significantly mitigates the first drawback of the FEI scheme, while eliminating the second and third completely. Here, integration is performed only over the regions of the unit sphere where fibers are in tension. Gauss-Kronrod quadrature is used across latitudes and the trapezoidal scheme across longitudes. Over a wide range of strain states, fiber material properties, and fiber angular distributions, results demonstrate that this new scheme always outperforms FEI, sometimes by orders of magnitude in the number of computational steps and relative accuracy of the stress calculation. PMID:26291492
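The quadrature idea can be sketched for a toy fiber strain on the unit sphere. Gauss-Legendre stands in here for the paper's Gauss-Kronrod rule, and the strain function and tension bands below are illustrative assumptions.

```python
import numpy as np

# Toy fiber "strain" eps(theta) = cos(2*theta): fibers are in tension only
# for theta in [0, pi/4] and [3*pi/4, pi], and only those latitude bands
# are sampled, instead of tiling the whole sphere.
def integrate_tension(n_theta=16, n_phi=64):
    nodes, weights = np.polynomial.legendre.leggauss(n_theta)
    total = 0.0
    for a, b in [(0.0, np.pi / 4), (3 * np.pi / 4, np.pi)]:  # tension bands
        theta = 0.5 * (b - a) * nodes + 0.5 * (b + a)        # map to [a, b]
        w = 0.5 * (b - a) * weights
        lat = float(np.sum(w * np.cos(2 * theta) * np.sin(theta)))
        # trapezoidal rule across longitudes (integrand is axisymmetric here)
        phi = np.linspace(0.0, 2 * np.pi, n_phi + 1)
        vals = np.full(phi.shape, lat)
        total += float(np.sum((vals[1:] + vals[:-1]) * np.diff(phi)) / 2)
    return total
```

Because no quadrature points are spent where fibers are slack, accuracy per function evaluation is much higher than with a uniform spherical mesh.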
FDTD simulation of EM wave propagation in 3-D media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, T.; Tripp, A.C.
1996-01-01
A finite-difference, time-domain solution to Maxwell's equations has been developed for simulating electromagnetic wave propagation in 3-D media. The algorithm allows arbitrary electrical conductivity and permittivity variations within a model. The staggered grid technique of Yee is used to sample the fields. A new optimized second-order difference scheme is designed to approximate the spatial derivatives. Like the conventional fourth-order difference scheme, the optimized second-order scheme needs four discrete values to calculate a single derivative. However, the optimized scheme is accurate over a wider wavenumber range. Compared to the fourth-order scheme, the optimized scheme imposes stricter limitations on the time step sizes but allows coarser grids. The net effect is that the optimized scheme is more efficient in terms of computation time and memory requirement than the fourth-order scheme. The temporal derivatives are approximated by second-order central differences throughout. The Liao transmitting boundary conditions are used to truncate an open problem. A reflection coefficient analysis shows that this transmitting boundary condition works very well. However, it is subject to instability. A method that can be easily implemented is proposed to stabilize the boundary condition. The finite-difference solution is compared to closed-form solutions for conducting and nonconducting whole spaces and to an integral-equation solution for a 3-D body in a homogeneous half-space. In all cases, the finite-difference solutions are in good agreement with the other solutions. Finally, the use of the algorithm is demonstrated with a 3-D model. Numerical results show that both the magnetic field response and electric field response can be useful for shallow-depth and small-scale investigations.
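The four-point stencil structure can be sketched as follows. The coefficients shown are the conventional fourth-order ones; the paper's optimized second-order values are not reproduced here.

```python
import math

# f'(x_i) ≈ [a*(f[i+1] - f[i-1]) + b*(f[i+2] - f[i-2])] / dx on a periodic
# grid. a = 2/3, b = -1/12 is the conventional fourth-order choice; an
# "optimized" second-order scheme keeps the same four-point stencil but
# retunes (a, b) for accuracy over a wider wavenumber range.
def deriv(f, dx, a=2/3, b=-1/12):
    n = len(f)
    return [(a * (f[(i + 1) % n] - f[i - 1])
             + b * (f[(i + 2) % n] - f[i - 2])) / dx for i in range(n)]

# sanity check on a resolved sine wave
x = [2 * math.pi * i / 64 for i in range(64)]
fp = deriv([math.sin(xi) for xi in x], x[1] - x[0])
err = max(abs(fp[i] - math.cos(x[i])) for i in range(64))
```

Both coefficient choices cost the same four reads per derivative, which is why the optimized variant trades formal order for wavenumber accuracy at no extra expense.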
Three-dimensional unstructured grid Euler computations using a fully-implicit, upwind method
NASA Technical Reports Server (NTRS)
Whitaker, David L.
1993-01-01
A method has been developed to solve the Euler equations on a three-dimensional unstructured grid composed of tetrahedra. The method uses an upwind flow solver with a linearized, backward-Euler time integration scheme. Each time step results in a sparse linear system of equations which is solved by an iterative, sparse matrix solver. Local time-stepping, switched evolution relaxation (SER), preconditioning, and reuse of the Jacobian are employed to accelerate the convergence rate. Implicit boundary conditions were found to be extremely important for fast convergence. Numerical experiments have shown that convergence rates comparable to those of a multigrid, central-difference scheme are achievable on the same mesh. Results are presented for several grids about an ONERA M6 wing.
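The SER acceleration can be sketched on a scalar steady-state problem. The residual function and the simple inverse-residual growth law below are illustrative assumptions, not the paper's solver.

```python
# Backward-Euler pseudo-time iteration with switched evolution relaxation
# (SER): the time step grows in inverse proportion to the residual norm,
# blending smoothly from time marching into Newton's method.
def solve_steady(R, dRdu, u0, dt0=0.1, tol=1e-12, max_iter=200):
    u, r0 = u0, abs(R(u0))
    for _ in range(max_iter):
        r = R(u)
        if abs(r) < tol:
            break
        dt = dt0 * r0 / abs(r)            # SER time-step growth
        u += r / (1.0 / dt - dRdu(u))     # linearized backward-Euler update
    return u
```

As the residual falls, 1/dt vanishes and the update becomes a pure Newton step, giving the fast terminal convergence the abstract refers to.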
NASA Astrophysics Data System (ADS)
Begović, Slaven; Ranero, César; Sallarès, Valentí; Meléndez, Adrià; Grevemeyer, Ingo
2016-04-01
Commonly, multichannel seismic reflection (MCS) and wide-angle seismic (WAS) data are modeled and interpreted with different approaches. Conventional travel-time tomography models based solely on WAS data lack the resolution to define model properties and, particularly, the geometry of geologic boundaries (reflectors) with the required accuracy, especially in the shallow, structurally complex upper geological layers. We mitigate this issue by combining the two data sets, specifically taking advantage of the high redundancy of MCS data, integrated with WAS data in a common inversion scheme, to obtain higher-resolution velocity (Vp) models, decrease Vp uncertainty, and improve the geometry of reflectors. To do so, we have adapted the tomo2d and tomo3d joint refraction and reflection travel-time tomography codes (Korenaga et al., 2000; Meléndez et al., 2015) to handle streamer data and MCS acquisition geometries. The scheme performs a joint travel-time tomographic inversion that integrates travel-time information from refracted and reflected phases in the WAS data with reflected phases identified in the MCS common depth point (CDP) or shot gathers. To illustrate the advantages of a common inversion approach, we have compared the modeling results for synthetic data sets using two different travel-time inversion strategies. First, we produced seismic velocity models and reflector geometries following a typical refraction and reflection travel-time tomographic strategy, modeling only WAS data with a typical acquisition geometry (one OBS every 10 km). Second, we performed a joint inversion integrating two coincident data sets, MCS data collected with an 8 km-long streamer and the WAS data, in a common inversion scheme.
Our synthetic results for the joint inversion indicate a 5-10 times smaller ray travel-time misfit in the deeper parts of the model compared to models obtained using wide-angle seismic data alone. As expected, there is an important improvement in the definition of the reflector geometry, which in turn improves the accuracy of the velocity retrieved just above and below the reflector. To test the joint inversion approach with real data, we combined wide-angle seismic (WAS) and coincident multichannel seismic reflection (MCS) data acquired in the northern Chile subduction zone into a common inversion scheme to obtain higher-resolution information on the upper plate and the inter-plate boundary.
Multigrid Acceleration of Time-Accurate DNS of Compressible Turbulent Flow
NASA Technical Reports Server (NTRS)
Broeze, Jan; Geurts, Bernard; Kuerten, Hans; Streng, Martin
1996-01-01
An efficient scheme for the direct numerical simulation of 3D transitional and developed turbulent flow is presented. Explicit and implicit time integration schemes for the compressible Navier-Stokes equations are compared. The nonlinear system resulting from the implicit time discretization is solved with an iterative method and accelerated by the application of a multigrid technique. Since we use central spatial discretizations and no artificial dissipation is added to the equations, the smoothing method is less effective than in the more traditional use of multigrid in steady-state calculations. Therefore, a special prolongation method is needed in order to obtain an effective multigrid method. This simulation scheme was studied in detail for compressible flow over a flat plate. In the laminar regime and in the first stages of turbulent flow the implicit method provides a speed-up of a factor 2 relative to the explicit method on a relatively coarse grid. At increased resolution this speed-up is enhanced correspondingly.
Performance Analysis of Transmit Diversity Systems with Multiple Antenna Replacement
NASA Astrophysics Data System (ADS)
Park, Ki-Hong; Yang, Hong-Chuan; Ko, Young-Chai
Transmit diversity systems based on orthogonal space-time block coding (OSTBC) usually suffer from rate loss and power spreading. A proper antenna selection scheme can help utilize the transmit antennas and transmission power in such systems more effectively. In this paper, we propose a new antenna selection scheme for such systems based on the idea of antenna switching. In particular, targeting a reduced number of pilot channels and RF chains, the transmitter replaces the antennas with the lowest received SNR with unused ones whenever the output SNR of the space-time decoder at the receiver falls below a certain threshold. With this new scheme, not only is the number of pilot channels and RF chains to be implemented decreased, but the average amount of feedback information is also reduced. To analyze the performance of this scheme, we derive the exact closed-form expression for the probability density function (PDF) of the received SNR. We show through numerical examples that the proposed scheme offers better performance than traditional OSTBC systems using all available transmit antennas, with a small amount of feedback information. We also examine the effect of different antenna configurations and feedback delay.
Robust Stabilization of T-S Fuzzy Stochastic Descriptor Systems via Integral Sliding Modes.
Li, Jinghao; Zhang, Qingling; Yan, Xing-Gang; Spurgeon, Sarah K
2017-09-19
This paper addresses the robust stabilization problem for T-S fuzzy stochastic descriptor systems using an integral sliding mode control paradigm. A classical integral sliding mode control scheme and a nonparallel distributed compensation (Non-PDC) integral sliding mode control scheme are presented. It is shown that two restrictive assumptions previously adopted in developing sliding mode controllers for Takagi-Sugeno (T-S) fuzzy stochastic systems are not required with the proposed framework. A unified framework for sliding mode control of T-S fuzzy systems is formulated. The proposed Non-PDC integral sliding mode control scheme encompasses existing schemes when the previously imposed assumptions hold. Stability of the sliding motion is analyzed, and the sliding mode controller is parameterized in terms of the solutions of a set of linear matrix inequalities, which facilitates design. The methodology is applied to an inverted pendulum model to validate the effectiveness of the results presented.
NASA Astrophysics Data System (ADS)
Lu, Jiazhen; Yang, Lie
2018-05-01
To achieve accurate and fully autonomous navigation for spacecraft, inertial/celestial integrated navigation is receiving increasing attention. In this study, a missile-borne inertial/stellar refraction integrated navigation scheme is proposed. Position Dilution of Precision (PDOP) for stellar refraction is introduced and the corresponding equation is derived. Based on the condition under which PDOP reaches its minimum value, an optimized observation scheme is proposed. To verify the feasibility of the proposed scheme, numerical simulation is conducted. The results of the Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF) are compared, and impact factors of navigation accuracy are studied in the simulation. The simulation results indicate that the proposed observation scheme achieves accurate positioning, and that the EKF and UKF results are similar.
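A single EKF measurement update, of the kind compared with the UKF above, can be sketched in scalar form. The measurement model and the numbers are illustrative, not the stellar-refraction model.

```python
# One scalar extended Kalman filter (EKF) measurement update:
# h is the (possibly nonlinear) measurement function, H its Jacobian
# evaluated at the current state, R the measurement noise variance.
def ekf_update(x, P, z, h, H, R):
    Hx = H(x)
    S = Hx * P * Hx + R           # innovation covariance (scalar case)
    K = P * Hx / S                # Kalman gain
    x_new = x + K * (z - h(x))    # state correction from the innovation
    P_new = (1.0 - K * Hx) * P    # covariance update
    return x_new, P_new

# illustrative linear case: h(x) = x, so the EKF reduces to the plain KF
x1, P1 = ekf_update(0.0, 1.0, 1.0, lambda x: x, lambda x: 1.0, 1.0)
```

The UKF replaces the Jacobian `H` with sigma-point propagation of `h`, which is why the two filters give similar results when the measurement model is nearly linear.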
Multimedia risk assessments require the temporal integration of atmospheric concentration and deposition with other media modules. However, providing an extended time series of estimates is computationally expensive. An alternative approach is to substitute long-term average a...
A Legendre tau-spectral method for solving time-fractional heat equation with nonlocal conditions.
Bhrawy, A H; Alghamdi, M A
2014-01-01
We develop the tau-spectral method to solve the time-fractional heat equation (T-FHE) with a nonlocal condition. In order to achieve a highly accurate solution of this problem, the operational matrix of fractional integration (described in the Riemann-Liouville sense) for shifted Legendre polynomials is investigated in conjunction with the tau-spectral scheme, and the Legendre operational polynomials are used as the basis functions. The main advantage of the presented scheme is that it converts the T-FHE with a nonlocal condition to a system of algebraic equations, which simplifies the problem. To demonstrate the validity and applicability of the developed spectral scheme, two numerical examples are presented. Logarithmic graphs of the maximum absolute errors are presented to demonstrate the exponential convergence of the proposed method. Comparisons between our spectral method and other methods show that our method is more accurate than those used to solve similar problems.
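The Riemann-Liouville fractional integral underlying the operational matrix acts on monomials in closed form; the helper below is a sketch of that building block only, not the paper's full operational matrix construction.

```python
import math

# Riemann-Liouville fractional integral of order alpha on a monomial:
#   I^alpha t^k = Gamma(k+1) / Gamma(k+1+alpha) * t^(k+alpha)
# For alpha = 1 this reduces to ordinary integration of t^k.
def frac_integral_monomial(k, alpha, t):
    return math.gamma(k + 1) / math.gamma(k + 1 + alpha) * t ** (k + alpha)
```

Applying this rule term by term to the shifted Legendre basis is what yields the operational matrix that turns the T-FHE into an algebraic system.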
Image communication scheme based on dynamic visual cryptography and computer generated holography
NASA Astrophysics Data System (ADS)
Palevicius, Paulius; Ragulskis, Minvydas
2015-01-01
Computer generated holograms are often exploited to implement optical encryption schemes. This paper proposes the integration of dynamic visual cryptography (an optical technique based on the interplay of visual cryptography and time-averaging geometric moiré) with the Gerchberg-Saxton algorithm. A stochastic moiré grating is used to embed the secret into a single cover image. The secret can be visually decoded by the naked eye only if the amplitude of harmonic oscillations corresponds to an accurately preselected value. The proposed visual image encryption scheme is based on computer generated holography, optical time-averaging moiré, and principles of dynamic visual cryptography. Dynamic visual cryptography is used both for the initial encryption of the secret image and for the final decryption. Phase data of the encrypted image are computed using the Gerchberg-Saxton algorithm. The optical image is decrypted using the computationally reconstructed field of amplitudes.
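The Gerchberg-Saxton step used to compute the phase data can be sketched in one dimension. The uniform source, random target amplitude, and iteration count are illustrative assumptions, not the paper's actual encrypted images.

```python
import numpy as np

# Gerchberg-Saxton sketch: alternate between the hologram plane and the
# image (Fourier) plane, imposing the known amplitude in each plane and
# keeping only the phase, to recover phase data consistent with both.
rng = np.random.default_rng(0)
n = 64
source = np.ones(n)                          # uniform illumination amplitude
target = np.abs(rng.normal(size=n)) + 0.1    # desired far-field amplitude
# rescale target so both planes carry the same energy (Parseval)
target *= np.linalg.norm(source) * np.sqrt(n) / np.linalg.norm(target)

phase = np.zeros(n)
err0 = np.linalg.norm(np.abs(np.fft.fft(source * np.exp(1j * phase))) - target)
for _ in range(200):
    far = np.fft.fft(source * np.exp(1j * phase))
    far = target * np.exp(1j * np.angle(far))   # impose target amplitude
    near = np.fft.ifft(far)
    phase = np.angle(near)                      # keep phase, reset amplitude
err = np.linalg.norm(np.abs(np.fft.fft(source * np.exp(1j * phase))) - target)
```

The Fourier-plane error of this error-reduction iteration is non-increasing, so `err` ends below the starting value `err0`.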
NASA Astrophysics Data System (ADS)
Veerapaneni, Shravan K.; Gueyffier, Denis; Biros, George; Zorin, Denis
2009-10-01
We extend [Shravan K. Veerapaneni, Denis Gueyffier, Denis Zorin, George Biros, A boundary integral method for simulating the dynamics of inextensible vesicles suspended in a viscous fluid in 2D, Journal of Computational Physics 228(7) (2009) 2334-2353] to the case of three-dimensional axisymmetric vesicles of spherical or toroidal topology immersed in viscous flows. Although the main components of the algorithm are similar in spirit to the 2D case (spectral approximation in space, semi-implicit time-stepping scheme), the main differences are that the bending and viscous forces require new analysis, the linearization for the semi-implicit schemes must be rederived, a fully implicit scheme must be used for the toroidal topology to eliminate a CFL-type restriction, and a novel numerical scheme for the evaluation of the 3D Stokes single-layer potential on an axisymmetric surface is necessary to speed up the calculations. By introducing these novel components, we obtain a time-stepping scheme that is experimentally unconditionally stable, has low cost per time step, and is third-order accurate in time. We present numerical results to analyze the cost and convergence rates of the scheme. To verify the solver, we compare it to a constrained variational approach for computing equilibrium shapes that does not involve interactions with a viscous fluid. To illustrate the applicability of the method, we consider a few vesicle-flow interaction problems: the sedimentation of a vesicle, and interactions of one and three vesicles with a background Poiseuille flow.
Stochastic Ocean Predictions with Dynamically-Orthogonal Primitive Equations
NASA Astrophysics Data System (ADS)
Subramani, D. N.; Haley, P., Jr.; Lermusiaux, P. F. J.
2017-12-01
The coastal ocean is a prime example of multiscale nonlinear fluid dynamics. Ocean fields in such regions are complex and intermittent, with nonstationary heterogeneous statistics. Due to the limited measurements, there are multiple sources of uncertainty, including the initial conditions, boundary conditions, forcing, parameters, and even the model parameterizations and equations themselves. For efficient and rigorous quantification and prediction of these uncertainties, the stochastic Dynamically Orthogonal (DO) PDEs for a primitive equation ocean modeling system with a nonlinear free surface are derived, and numerical schemes for their space-time integration are obtained. Detailed numerical studies with idealized-to-realistic regional ocean dynamics are completed. These include consistency checks for the numerical schemes and comparisons with ensemble realizations. As an illustrative example, we simulate the 4-D multiscale uncertainty in the Middle Atlantic/New York Bight region during the months of January to March 2017. To provide initial conditions for the uncertainty subspace, uncertainties in the region were objectively analyzed using historical data. The DO primitive equations were subsequently integrated in space and time. The probability distribution function (pdf) of the ocean fields is compared to in-situ, remote sensing, and opportunity data collected during the coincident POSYDON experiment. Results show that our probabilistic predictions have skill and are three to four orders of magnitude faster than classic ensemble schemes.
NASA Astrophysics Data System (ADS)
Nguyen, Tien Long; Sansour, Carlo; Hjiaj, Mohammed
2017-05-01
In this paper, an energy-momentum method for geometrically exact Timoshenko-type beams is proposed. The classical time integration schemes in dynamics are known to exhibit instability in the non-linear regime. The so-called Timoshenko-type beam, with its use of a rotational degree of freedom, leads to simpler strain relations and simpler expressions for the inertial terms than the well-known Bernoulli-type model. The treatment of the Bernoulli model has recently been addressed by the authors. In the present work, we extend our approach of using the strain rates to define the strain fields to in-plane geometrically exact Timoshenko-type beams. The large rotational degrees of freedom are computed exactly. The well-known enhanced strain method is used to avoid locking phenomena. Conservation of energy, momentum, and angular momentum is proved both formally and numerically. The excellent performance of the formulation is demonstrated through a range of examples.
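The flavour of an energy-conserving integrator can be sketched with the implicit midpoint rule on a linear oscillator, used here purely as a stand-in for the authors' energy-momentum formulation: the discrete energy q² + v² is preserved to rounding error for any step size.

```python
# Implicit midpoint rule for q' = v, v' = -q; for this linear system the
# 2x2 implicit update can be solved in closed form (a Cayley transform),
# and the map preserves q^2 + v^2 exactly in exact arithmetic.
def midpoint_step(q, v, dt):
    h = dt / 2.0
    d = 1.0 + h * h
    return (((1 - h * h) * q + dt * v) / d,
            (-dt * q + (1 - h * h) * v) / d)

# integrate 1000 large steps; the energy does not drift
q, v = 1.0, 0.0
for _ in range(1000):
    q, v = midpoint_step(q, v, 0.1)
```

Explicit schemes like forward Euler would grow the energy at every step here, which is the instability in the non-linear regime that energy-momentum methods are designed to avoid.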
Adaptive mesh fluid simulations on GPU
NASA Astrophysics Data System (ADS)
Wang, Peng; Abel, Tom; Kaehler, Ralf
2010-10-01
We describe an implementation of compressible inviscid fluid solvers with block-structured adaptive mesh refinement on Graphics Processing Units using NVIDIA's CUDA. We show that a class of high resolution shock capturing schemes can be mapped naturally on this architecture. Using the method of lines approach with the second order total variation diminishing Runge-Kutta time integration scheme, piecewise linear reconstruction, and a Harten-Lax-van Leer Riemann solver, we achieve an overall speedup of approximately 10 times faster execution on one graphics card as compared to a single core on the host computer. We attain this speedup in uniform grid runs as well as in problems with deep AMR hierarchies. Our framework can readily be applied to more general systems of conservation laws and extended to higher order shock capturing schemes. This is shown directly by an implementation of a magneto-hydrodynamic solver and comparing its performance to the pure hydrodynamic case. Finally, we also combined our CUDA parallel scheme with MPI to make the code run on GPU clusters. Close to ideal speedup is observed on up to four GPUs.
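The method-of-lines update described above can be sketched for scalar advection u_t + u_x = 0: minmod piecewise-linear reconstruction, an upwind flux (the scalar limit of an HLL solver for unit wave speed), and second-order TVD Runge-Kutta. The grid size and CFL number are illustrative.

```python
import math

def minmod(a, b):
    # slope limiter: zero at extrema, the smaller slope elsewhere
    return 0.0 if a * b <= 0 else (a if abs(a) < abs(b) else b)

def rhs(u, dx):
    n = len(u)
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # F_{i-1/2}: upwind (left) cell value reconstructed at its right edge
    f = [u[i - 1] + 0.5 * s[i - 1] for i in range(n)]
    return [-(f[(i + 1) % n] - f[i]) / dx for i in range(n)]

def step(u, dx, dt):
    # second-order TVD (SSP) Runge-Kutta: Euler predictor + averaged corrector
    k1 = rhs(u, dx)
    u1 = [u[i] + dt * k1[i] for i in range(len(u))]
    k2 = rhs(u1, dx)
    return [0.5 * (u[i] + u1[i] + dt * k2[i]) for i in range(len(u))]

# advect a sine wave once around a periodic unit domain
n = 100
dx = 1.0 / n
u0 = [math.sin(2 * math.pi * i * dx) for i in range(n)]
u, dt = u0[:], 0.25 * dx
for _ in range(400):
    u = step(u, dx, dt)
err = max(abs(u[i] - u0[i]) for i in range(n))
tv0 = sum(abs(u0[i] - u0[i - 1]) for i in range(n))
tv = sum(abs(u[i] - u[i - 1]) for i in range(n))
```

Each cell update touches only its immediate neighbours, which is exactly the data locality that makes schemes of this class map well onto GPUs.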
High Speed Solution of Spacecraft Trajectory Problems Using Taylor Series Integration
NASA Technical Reports Server (NTRS)
Scott, James R.; Martini, Michael C.
2008-01-01
Taylor series integration is implemented in a spacecraft trajectory analysis code, the Spacecraft N-body Analysis Program (SNAP), and compared with the code's existing eighth-order Runge-Kutta-Fehlberg time integration scheme. Nine trajectory problems, including near-Earth, lunar, Mars, and Europa missions, are analyzed. Head-to-head comparison at five different error tolerances shows that, on average, Taylor series integration is faster than Runge-Kutta-Fehlberg by a factor of 15.8. Results further show that Taylor series integration has superior convergence properties. Taylor series integration demonstrates that it can provide rapid, highly accurate solutions to spacecraft trajectory problems.
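For a scalar model problem y' = y, a Taylor-series step can be sketched directly, since the coefficients obey c_{k+1} = c_k/(k+1). Trajectory codes generate such coefficients recursively for the full N-body right-hand side; the order and step size here are illustrative.

```python
# One Taylor-series step for y' = y: build the coefficients recursively
# and sum the series at h. High orders allow large, accurate steps, which
# is the source of the speed advantage over fixed-order Runge-Kutta.
def taylor_step(y, h, order=15):
    c = y
    total = c
    for k in range(order):
        c = c / (k + 1)          # c_{k+1} = c_k / (k+1) for y' = y
        total += c * h ** (k + 1)
    return total

# two steps of h = 0.5 integrate y' = y from y(0) = 1 to y(1) = e
y = taylor_step(taylor_step(1.0, 0.5), 0.5)
```

Because the order is a free parameter, the step size and order can be chosen together to meet a requested tolerance, giving the favourable convergence behaviour noted above.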
A method for exponential propagation of large systems of stiff nonlinear differential equations
NASA Technical Reports Server (NTRS)
Friesner, Richard A.; Tuckerman, Laurette S.; Dornblaser, Bright C.; Russo, Thomas V.
1989-01-01
A new time integrator for large, stiff systems of linear and nonlinear coupled differential equations is described. For linear systems, the method consists of forming a small (5-15-term) Krylov space using the Jacobian of the system and carrying out exact exponential propagation within this space. Nonlinear corrections are incorporated via a convolution integral formalism; the integral is evaluated via approximate Krylov methods as well. Gains in efficiency ranging from factors of 2 to 30 are demonstrated for several test problems as compared to a forward Euler scheme and to the integration package LSODE.
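The linear part of such a propagator can be sketched as an Arnoldi projection followed by exponentiation of the small projected matrix. The test matrix, dimensions, and the naive scaling-and-squaring exponential below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def expm_small(M, terms=30):
    # scaling-and-squaring Taylor series, adequate for a small dense matrix
    s = max(0, int(np.ceil(np.log2(max(1e-16, np.linalg.norm(M, np.inf))))) + 1)
    X = M / 2 ** s
    E, T = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        T = T @ X / k
        E = E + T
    for _ in range(s):
        E = E @ E
    return E

def krylov_expv(A, y0, t, m=15):
    # approximate exp(t*A) @ y0 in an m-dimensional Krylov space:
    # y ≈ beta * V_m @ expm(t*H_m) @ e1
    n = len(y0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(y0)
    V[:, 0] = y0 / beta
    for j in range(m):                     # Arnoldi iteration
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:            # happy breakdown: invariant subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    E = expm_small(t * H[:m, :m])
    return beta * (V[:, :m] @ E[:, 0])

# demo: diagonal (stiff) Jacobian with eigenvalues -1 .. -5
A = np.diag([-1.0, -2.0, -3.0, -4.0, -5.0])
y = krylov_expv(A, np.ones(5), 0.5)
```

Only matrix-vector products with the Jacobian are required, so the same routine applies when A is available solely as an operator, which is the setting of the paper.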
Involution and Difference Schemes for the Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Gerdt, Vladimir P.; Blinkov, Yuri A.
In the present paper we consider the Navier-Stokes equations for the two-dimensional viscous incompressible fluid flows and apply to these equations our earlier designed general algorithmic approach to generation of finite-difference schemes. In doing so, we complete first the Navier-Stokes equations to involution by computing their Janet basis and discretize this basis by its conversion into the integral conservation law form. Then we again complete the obtained difference system to involution with eliminating the partial derivatives and extracting the minimal Gröbner basis from the Janet basis. The elements in the obtained difference Gröbner basis that do not contain partial derivatives of the dependent variables compose a conservative difference scheme. By exploiting arbitrariness in the numerical integration approximation we derive two finite-difference schemes that are similar to the classical scheme by Harlow and Welch. Each of the two schemes is characterized by a 5×5 stencil on an orthogonal and uniform grid. We also demonstrate how an inconsistent difference scheme with a 3×3 stencil is generated by an inappropriate numerical approximation of the underlying integrals.
NASA Astrophysics Data System (ADS)
Bell, Stephen C.; Ginsburg, Marc A.; Rao, Prabhakara P.
An important part of space launch vehicle mission planning for a planetary mission is the integrated analysis of guidance and performance dispersions for both booster and upper stage vehicles. For the Mars Observer mission, an integrated trajectory analysis was used to maximize the scientific payload and to minimize injection errors by optimizing the energy management of both vehicles. This was accomplished by designing the Titan III booster vehicle to inject into a hyperbolic departure plane, and the Transfer Orbit Stage (TOS) to correct any booster dispersions. An integrated Monte Carlo analysis of the performance and guidance dispersions of both vehicles provided sensitivities, an evaluation of their guidance schemes and an injection error covariance matrix. The polynomial guidance schemes used for the Titan III variable flight azimuth computations and the TOS solid rocket motor ignition time and burn direction derivations accounted for a wide variation of launch times, performance dispersions, and target conditions. The Mars Observer spacecraft was launched on 25 September 1992 on the Titan III/TOS vehicle. The post-flight analysis indicated that a near perfect park orbit injection was achieved, followed by a trans-Mars injection with less than 2-sigma errors.
A Kirchhoff approach to seismic modeling and prestack depth migration
NASA Astrophysics Data System (ADS)
Liu, Zhen-Yue
1993-05-01
The Kirchhoff integral provides a robust method for implementing seismic modeling and prestack depth migration, which can handle lateral velocity variation and turning waves. With little extra computational cost, Kirchhoff-type migration can produce multiple outputs that have the same phase but different amplitudes, unlike other migration methods. The ratio of these amplitudes is helpful in computing quantities such as the reflection angle. I develop a seismic modeling and prestack depth migration method based on the Kirchhoff integral that handles both laterally variant velocity and dips beyond 90 degrees. The method uses a finite-difference algorithm to calculate travel times and WKBJ amplitudes for the Kirchhoff integral. Compared to ray-tracing algorithms, the finite-difference algorithm gives an efficient implementation and single-valued quantities (first arrivals) on output. In my finite-difference algorithm, an upwind scheme is used to calculate travel times, and the Crank-Nicolson scheme is used to calculate amplitudes. Moreover, interpolation is applied to save computation cost. The modeling and migration algorithms require a smooth velocity function. I develop a velocity-smoothing technique based on damped least-squares to aid in obtaining a successful migration.
An Exact Integration Scheme for Radiative Cooling in Hydrodynamical Simulations
NASA Astrophysics Data System (ADS)
Townsend, R. H. D.
2009-04-01
A new scheme for incorporating radiative cooling in hydrodynamical codes is presented, centered around exact integration of the governing semidiscrete cooling equation. Using benchmark calculations based on the cooling downstream of a radiative shock, I demonstrate that the new scheme outperforms traditional explicit and implicit approaches in terms of accuracy, while remaining competitive in terms of execution speed.
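For a single power-law cooling band the semidiscrete cooling ODE integrates in closed form, which is the kernel of the exact-integration idea; Townsend's scheme stitches such pieces together via a temporal evolution function. The toy sketch below (function names and constants are illustrative) contrasts the exact update with a single explicit Euler step.

```python
def cool_exact(T0, c, alpha, dt):
    """Exact update for the power-law cooling ODE dT/dt = -c*T**alpha
    (alpha != 1): separate variables and integrate analytically.
    A single-band sketch of the exact-integration idea described above."""
    return (T0**(1.0 - alpha) + (alpha - 1.0) * c * dt)**(1.0 / (1.0 - alpha))

def cool_euler(T0, c, alpha, dt):
    """One explicit Euler step of the same ODE, for comparison."""
    return T0 - dt * c * T0**alpha
```

For alpha = 2 the exact update reduces to T0 / (1 + c*T0*dt), so it remains positive and accurate for arbitrarily large steps, while explicit Euler needs many substeps to approach the same value.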
Accuracy of an unstructured-grid upwind-Euler algorithm for the ONERA M6 wing
NASA Technical Reports Server (NTRS)
Batina, John T.
1991-01-01
Improved algorithms for the solution of the three-dimensional, time-dependent Euler equations are presented for aerodynamic analysis involving unstructured dynamic meshes. The improvements involve recently developed enhancements to the spatial and temporal discretizations used by unstructured-grid flow solvers. The spatial discretization involves a flux-split approach that is naturally dissipative and captures shock waves sharply with at most one grid point within the shock structure. The temporal discretization involves either an explicit time-integration scheme using a multistage Runge-Kutta procedure or an implicit time-integration scheme using a Gauss-Seidel relaxation procedure, which is computationally efficient for either steady or unsteady flow problems. With the implicit Gauss-Seidel procedure, very large time steps may be used for rapid convergence to steady state, and the step size for unsteady cases may be selected for temporal accuracy rather than for numerical stability. Steady flow results are presented for both the NACA 0012 airfoil and the Office National d'Etudes et de Recherches Aerospatiales M6 wing to demonstrate applications of the new Euler solvers. The paper presents a description of the Euler solvers along with results and comparisons that assess the capability.
Provably Secure Heterogeneous Access Control Scheme for Wireless Body Area Network.
Omala, Anyembe Andrew; Mbandu, Angolo Shem; Mutiria, Kamenyi Domenic; Jin, Chunhua; Li, Fagen
2018-04-28
Wireless body area network (WBAN) provides a medium through which physiological information can be harvested and transmitted to an application provider (AP) in real time. Integrating WBAN in a heterogeneous Internet of Things (IoT) ecosystem would enable an AP to monitor patients from anywhere and at any time. However, the IoT roadmap of interconnected 'Things' is still faced with many challenges. One of the challenges in healthcare is the security and privacy of streamed medical data from heterogeneously networked devices. In this paper, we first propose a heterogeneous signcryption scheme where the sender is in a certificateless cryptographic (CLC) environment while the receiver is in an identity-based cryptographic (IBC) environment. We then use this scheme to design a heterogeneous access control protocol. Formal security proofs for indistinguishability against adaptive chosen ciphertext attack and unforgeability against adaptive chosen message attack in the random oracle model are presented. In comparison with some of the existing access control schemes, our scheme has lower computation and communication cost.
Two modified symplectic partitioned Runge-Kutta methods for solving the elastic wave equation
NASA Astrophysics Data System (ADS)
Su, Bo; Tuo, Xianguo; Xu, Ling
2017-08-01
Based on a modified strategy, two modified symplectic partitioned Runge-Kutta (PRK) methods are proposed for the temporal discretization of the elastic wave equation. The two symplectic schemes are similar in form but are different in nature. After the spatial discretization of the elastic wave equation, the ordinary Hamiltonian formulation for the elastic wave equation is presented. The PRK scheme is then applied for time integration. An additional term associated with spatial discretization is inserted into the different stages of the PRK scheme. Theoretical analyses are conducted to evaluate the numerical dispersion and stability of the two novel PRK methods. A finite difference method is used to approximate the spatial derivatives since the two schemes are independent of the spatial discretization technique used. The numerical solutions computed by the two new schemes are compared with those computed by a conventional symplectic PRK. The numerical results verify the new methods and are superior to those generated by conventional methods in seismic wave modeling.
Evaluation of subgrid-scale turbulence models using a fully simulated turbulent flow
NASA Technical Reports Server (NTRS)
Clark, R. A.; Ferziger, J. H.; Reynolds, W. C.
1977-01-01
An exact turbulent flow field was calculated on a three-dimensional grid with 64 points on a side. The flow simulates grid-generated turbulence from wind tunnel experiments. In this simulation, the grid spacing is small enough to include essentially all of the viscous energy dissipation, and the box is large enough to contain the largest eddy in the flow. The method is limited to low turbulence Reynolds numbers, in our case R_λ = 36.6. To complete the calculation using a reasonable amount of computer time with reasonable accuracy, a third-order time-integration scheme was developed which runs at about the same speed as a simple first-order scheme. It obtains this accuracy by saving the velocity field and its first time derivative at each time step. Fourth-order accurate space-differencing is used.
Asynchronous variational integration using continuous assumed gradient elements.
Wolff, Sebastian; Bucher, Christian
2013-03-01
Asynchronous variational integration (AVI) is a tool which improves the numerical efficiency of explicit time stepping schemes when applied to finite element meshes with local spatial refinement. This is achieved by associating an individual time step length to each spatial domain. Furthermore, long-term stability is ensured by its variational structure. This article presents AVI in the context of finite elements based on a weakened weak form (W2) Liu (2009) [1], exemplified by continuous assumed gradient elements Wolff and Bucher (2011) [2]. The article presents the main ideas of the modified AVI, gives implementation notes and a recipe for estimating the critical time step.
A fast CT reconstruction scheme for a general multi-core PC.
Zeng, Kai; Bai, Erwei; Wang, Ge
2007-01-01
Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA) that we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible. The graphics processing unit (GPU) in a current graphics card can only reconstruct images in a reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. In the proposed scheme, several techniques are integrated, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved by several folds using the latest quad-core processors.
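The SIMD batching idea at the heart of the acceleration can be illustrated in numpy: instead of visiting pixels one at a time, a whole image of detector coordinates is computed per view in one vectorized pass. The geometry (parallel-beam, nearest-neighbor interpolation) and names below are illustrative, not the paper's fan-beam implementation.

```python
import numpy as np

def backproject_vectorized(sino, angles, size):
    """Simple parallel-beam backprojection, vectorized over all pixels at
    once (the numpy analogue of the SIMD batching discussed above).
    sino: (n_angles, n_det) sinogram; detector spacing = 1 pixel;
    nearest-neighbor detector interpolation."""
    xs = np.arange(size) - size / 2.0
    X, Y = np.meshgrid(xs, xs)          # 'xy' indexing: X varies along columns
    image = np.zeros((size, size))
    n_det = sino.shape[1]
    for a, theta in enumerate(angles):
        # detector coordinate of every pixel for this view, in one shot
        t = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2.0
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        image += sino[a, idx]
    return image / len(angles)
```

The same loop written pixel-by-pixel produces identical values but runs orders of magnitude slower in an interpreted language, which mirrors the scalar-vs-SIMD gap the paper exploits at the instruction level.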
NASA Astrophysics Data System (ADS)
Havasi, Ágnes; Kazemi, Ehsan
2018-04-01
In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but also properly describe the amplitude and phase of the propagating waves. It is not clear whether amending the developed schemes by extrapolation methods to obtain a high order of accuracy preserves the qualitative properties of these schemes from the perspective of dissipation, dispersion and stability analysis. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal for minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy, respectively, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
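The order-raising combination at the core of Richardson extrapolation can be shown on the simplest base method: one step of size h and two steps of size h/2 are combined to cancel the leading error term. This sketch uses explicit Euler purely as an illustration, not the optimized schemes of the paper.

```python
import math

def euler_step(f, y, t, h):
    """One explicit Euler step for y' = f(t, y)."""
    return y + h * f(t, y)

def richardson_step(f, y, t, h):
    """One Richardson-extrapolated Euler step: combine one step of size h
    with two steps of size h/2 as 2*fine - coarse, cancelling the O(h^2)
    local error term and raising the method from first to second order."""
    coarse = euler_step(f, y, t, h)
    half = euler_step(f, y, t, h / 2)
    fine = euler_step(f, half, t + h / 2, h / 2)
    return 2 * fine - coarse
```

The same `2^p * fine - coarse` weighting (with p the base order) is what lifts the paper's optimized third- and fourth-order schemes to fourth and fifth order; the open question the abstract addresses is whether the dissipation and dispersion optimality survives the combination.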
Inelastic and Dynamic Fracture and Stress Analyses
NASA Technical Reports Server (NTRS)
Atluri, S. N.
1984-01-01
Large deformation inelastic stress analysis and inelastic and dynamic crack propagation research work is summarized. The salient topics of interest in engine structure analysis that are discussed herein include: (1) a path-independent integral (T) in inelastic fracture mechanics, (2) analysis of dynamic crack propagation, (3) generalization of constitutive relations of inelasticity for finite deformations, (4) complementary energy approaches in inelastic analyses, and (5) objectivity of time integration schemes in inelastic stress analysis.
Simurda, Matej; Duggen, Lars; Basse, Nils T; Lassen, Benny
2018-02-01
A numerical model for transit-time ultrasonic flowmeters operating under multiphase flow conditions previously presented by us is extended by mesh refinement and grid point redistribution. The method solves modified first-order stress-velocity equations of elastodynamics with additional terms to account for the effect of the background flow. Spatial derivatives are calculated by a Fourier collocation scheme allowing the use of the fast Fourier transform, while the time integration is realized by the explicit third-order Runge-Kutta finite-difference scheme. The method is compared against analytical solutions and experimental measurements to verify the benefit of using mapped grids. Additionally, a study of clamp-on and in-line ultrasonic flowmeters operating under multiphase flow conditions is carried out.
NASA Astrophysics Data System (ADS)
Yang, Hui; Zhang, Jie; Ji, Yuefeng; He, Yongqi; Lee, Young
2016-07-01
Cloud radio access network (C-RAN) is a promising scenario for accommodating high-performance services with ubiquitous user coverage and real-time cloud computing in the 5G era. However, the radio network, optical network and processing unit cloud have been decoupled from each other, so that their resources are controlled independently. Traditional architectures cannot implement the resource optimization and scheduling needed for high-level service guarantees, owing to the communication obstacle among these domains and the growing number of mobile internet users. In this paper, we report a study on multi-dimensional resources integration (MDRI) for service provisioning in cloud radio over fiber network (C-RoFN). A resources integrated provisioning (RIP) scheme using an auxiliary graph is introduced based on the proposed architecture. The MDRI can enhance the responsiveness to dynamic end-to-end user demands and globally optimize radio frequency, optical network and processing resources effectively to maximize radio coverage. The feasibility of the proposed architecture is experimentally verified on an OpenFlow-based enhanced SDN testbed. The performance of the RIP scheme under a heavy traffic load scenario is also quantitatively evaluated to demonstrate the efficiency of the proposal based on the MDRI architecture in terms of resource utilization, path blocking probability, network cost and path provisioning latency, compared with other provisioning schemes.
Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung
2012-10-08
Speed enhancement of integral-imaging-based incoherent Fourier hologram capture using a graphics processing unit is reported. The integral-imaging-based method enables exact hologram capture of real-existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply a parallel computation scheme using the graphics processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.
Foli, Samson; Ros-Tonen, Mirjam A F; Reed, James; Sunderland, Terry
2018-07-01
In recognition of the failures of sectoral approaches to overcome global challenges of biodiversity loss, climate change, food insecurity and poverty, scientific discourse on biodiversity conservation and sustainable development is shifting towards integrated landscape governance arrangements. Current landscape initiatives however very much depend on external actors and funding, raising the question of whether, how, and under what conditions locally embedded resource management schemes can serve as entry points for the implementation of integrated landscape approaches. This paper assesses the entry point potential of three established natural resource management schemes in West Africa that target landscape degradation with involvement of local communities: the Chantier d'Aménagement Forestier scheme encompassing forest management sites across Burkina Faso, and the Modified Taungya System and community wildlife resource management (CREMA) initiatives in Ghana. Based on a review of the current literature, we analyze the extent to which design principles that define a landscape approach apply to these schemes. We found that the CREMA meets most of the desired criteria, but that its scale may be too limited to guarantee effective landscape governance, hence requiring upscaling. Conversely, the other two initiatives are strongly lacking in their design principles on fundamental components regarding integrated approaches, continual learning, and capacity building. Monitoring and evaluation bodies and participatory learning and negotiation platforms could enhance the schemes' alignment with integrated landscape approaches.
A reliable transmission protocol for ZigBee-based wireless patient monitoring.
Chen, Shyr-Kuen; Kao, Tsair; Chan, Chia-Tai; Huang, Chih-Ning; Chiang, Chih-Yen; Lai, Chin-Yu; Tung, Tse-Hua; Wang, Pi-Chung
2012-01-01
Patient monitoring systems are gaining importance as the fast-growing global elderly population increases demands for caretaking. These systems use wireless technologies to transmit vital signs for medical evaluation. In a multihop ZigBee network, the existing systems usually use broadcast or multicast schemes to increase the reliability of signal transmission; however, both schemes lead to significantly higher network traffic and end-to-end transmission delay. In this paper, we present a reliable transmission protocol based on anycast routing for wireless patient monitoring. Our scheme automatically selects the closest data receiver in an anycast group as a destination to reduce the transmission latency as well as the control overhead. The new protocol also shortens the latency of path recovery by initiating route recovery from the intermediate routers of the original path. On the basis of a reliable transmission scheme, we implement a ZigBee device for fall monitoring, which integrates fall detection, indoor positioning, and ECG monitoring. When the triaxial accelerometer of the device detects a fall, the current position of the patient is transmitted to an emergency center through a ZigBee network. In order to clarify the situation of the fallen patient, 4-s ECG signals are also transmitted. Our transmission scheme ensures the successful transmission of these critical messages. The experimental results show that our scheme is fast and reliable. We also demonstrate that our devices can seamlessly integrate with the next generation technology of wireless wide area network, worldwide interoperability for microwave access, to achieve real-time patient monitoring.
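The "closest data receiver in an anycast group" selection can be sketched as a breadth-first search by hop count over the network graph. The function name, graph representation, and topology below are hypothetical illustrations, not the protocol's actual routing tables.

```python
from collections import deque

def closest_anycast_receiver(adj, source, receivers):
    """Pick the nearest member of an anycast group by hop count (BFS),
    mirroring the closest-receiver selection described above.
    adj: dict mapping each node to a list of neighbour nodes."""
    receivers = set(receivers)
    seen = {source}
    queue = deque([(source, 0)])
    while queue:
        node, dist = queue.popleft()
        if node in receivers:
            return node, dist       # BFS guarantees this is a minimum-hop receiver
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None, None               # no receiver reachable
```

Delivering to whichever group member is fewest hops away is what cuts both latency and control overhead relative to flooding every receiver by broadcast or multicast.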
Lin, Yuehe; Bennett, Wendy D.; Timchalk, Charles; Thrall, Karla D.
2004-03-02
Microanalytical systems based on a microfluidics/electrochemical detection scheme are described. Individual modules, such as microfabricated piezoelectrically actuated pumps and a microelectrochemical cell were integrated onto portable platforms. This allowed rapid change-out and repair of individual components by incorporating "plug and play" concepts now standard in PC's. Different integration schemes were used for construction of the microanalytical systems based on microfluidics/electrochemical detection. In one scheme, all individual modules were integrated in the surface of the standard microfluidic platform based on a plug-and-play design. Microelectrochemical flow cell which integrated three electrodes based on a wall-jet design was fabricated on polymer substrate. The microelectrochemical flow cell was then plugged directly into the microfluidic platform. Another integration scheme was based on a multilayer lamination method utilizing stacking modules with different functionality to achieve a compact microanalytical device. Application of the microanalytical system for detection of lead in, for example, river water and saliva samples using stripping voltammetry is described.
Comparison of Aircraft Models and Integration Schemes for Interval Management in the TRACON
NASA Technical Reports Server (NTRS)
Neogi, Natasha; Hagen, George E.; Herencia-Zapana, Heber
2012-01-01
Reusable models of common elements for communication, computation, decision and control in air traffic management are necessary in order to enable simulation, analysis and assurance of emergent properties, such as safety and stability, for a given operational concept. Uncertainties due to faults, such as dropped messages, along with non-linearities and sensor noise are an integral part of these models, and impact emergent system behavior. Flight control algorithms designed using a linearized version of the flight mechanics will exhibit error due to model uncertainty, and may not be stable outside a neighborhood of the given point of linearization. Moreover, the communication mechanism by which the sensed state of an aircraft is fed back to a flight control system (such as an ADS-B message) impacts the overall system behavior, both due to sensor noise as well as dropped messages (vacant samples). Additionally, simulation of the flight control system can exhibit further numerical instability, due to the selection of the integration scheme and approximations made in the flight dynamics. We examine the theoretical and numerical stability of a speed controller under the Euler and Runge-Kutta schemes of integration, for the Maintain phase of a Mid-Term (2035-2045) Interval Management (IM) Operational Concept for descent and landing operations. We model uncertainties in communication due to missed ADS-B messages by vacant samples in the integration schemes, and compare the emergent behavior of the system, in terms of stability, via the boundedness of the final system state. Any bound on the errors incurred by these uncertainties will play an essential part in a composable assurance argument required for real-time, flight-deck guidance and control systems.
Thus, we believe that the creation of reusable models, which possess property guarantees, such as safety and stability, is an innovative and essential requirement to assessing the emergent properties of novel airspace concepts of operation.
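The Euler-versus-Runge-Kutta comparison under vacant samples can be sketched with a toy closed loop: a first-order speed response driven by a proportional command computed from the last *received* measurement, with dropped samples holding the previous value. The gain, time constant, target speed, and drop pattern below are illustrative assumptions, not the operational concept's parameters.

```python
def euler(f, y, t, h):
    """Explicit Euler integration step."""
    return y + h * f(t, y)

def rk4(f, y, t, h):
    """Classical fourth-order Runge-Kutta integration step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def simulate(step, k=1.0, tau=1.0, v_target=200.0, h=0.1, n=600):
    """Closed-loop speed hold: v' = (u - v)/tau with proportional command
    u = k*(v_target - v_meas). v_meas is the last received state; every
    7th sample is vacant (a dropped ADS-B-like message) and the previous
    measurement is held, as in the dropout model discussed above."""
    v, v_meas = 0.0, 0.0
    for i in range(n):
        if i % 7 != 0:                        # fresh measurement arrives
            v_meas = v
        u = k * (v_target - v_meas)           # proportional speed command
        f = lambda t, x, u=u: (u - x) / tau   # first-order speed response
        v = step(f, v, i * h, h)
    return v
```

With both integrators the final state stays bounded and settles at the proportional-control offset value k*v_target/(1+k); the interesting regime for the stability question above is when the gain, step size, or dropout rate is pushed until the Euler loop loses boundedness while Runge-Kutta retains it.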
NASA Astrophysics Data System (ADS)
Liu, Zhengguang; Li, Xiaoli
2018-05-01
In this article, we present a new second-order finite difference discrete scheme for a fractal mobile/immobile transport model based on an equivalent transformative Caputo formulation. The new transformative formulation takes the singular kernel away to make the integral calculation more efficient. Furthermore, this definition is also effective where α is a positive integer. The T-Caputo derivative also helps us to increase the convergence rate of the discretization of the α-order (0 < α < 1) Caputo derivative from O(τ^(2-α)) to O(τ^(3-α)), where τ is the time step. For numerical analysis, a Crank-Nicolson finite difference scheme to solve the fractal mobile/immobile transport model is introduced and analyzed. The unconditional stability and a priori estimates of the scheme are given rigorously. Moreover, the applicability and accuracy of the scheme are demonstrated by numerical experiments to support our theoretical analysis.
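The O(τ^(2-α)) baseline that the transformative formulation improves upon is the classical L1 discretization of the Caputo derivative. A sketch of that baseline (not the T-Caputo scheme itself); grid and test function are illustrative.

```python
import math
import numpy as np

def caputo_l1(u, tau, alpha):
    """Classical L1 approximation of the Caputo derivative of order
    0 < alpha < 1 at the grid points t_n = n*tau, obtained by integrating
    a piecewise-linear interpolant against the singular kernel.
    Accuracy O(tau^(2-alpha)) -- the baseline rate quoted above.
    u: samples u(0), u(tau), ..., u(N*tau)."""
    n_pts = len(u)
    # convolution weights b_j = (j+1)^(1-alpha) - j^(1-alpha)
    b = np.array([(j + 1)**(1 - alpha) - j**(1 - alpha) for j in range(n_pts)])
    c = tau**(-alpha) / math.gamma(2 - alpha)
    out = np.zeros(n_pts)
    for n in range(1, n_pts):
        diffs = np.diff(u[:n + 1])              # u_{k+1} - u_k
        out[n] = c * np.sum(b[:n][::-1] * diffs)
    return out
```

For u(t) = t^2 the exact Caputo derivative is 2 t^(2-α)/Γ(3-α), which makes the convergence rate easy to verify numerically.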
Asynchronous discrete event schemes for PDEs
NASA Astrophysics Data System (ADS)
Stone, D.; Geiger, S.; Lord, G. J.
2017-08-01
A new class of asynchronous discrete-event simulation schemes for advection-diffusion-reaction equations is introduced, based on the principle of allowing quanta of mass to pass through faces of a (regular, structured) Cartesian finite volume grid. The timescales of these events are linked to the flux on the face. The resulting schemes are self-adaptive, and local in both time and space. Experiments are performed on realistic physical systems related to porous media flow applications, including a large 3D advection-diffusion equation and advection-diffusion-reaction systems. The results are compared to highly accurate reference solutions where the temporal evolution is computed with exponential integrator schemes using the same finite volume discretisation. This allows a reliable estimation of the solution error. Our results indicate first-order convergence of the error as a control parameter is decreased, and we outline a framework for analysis.
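The quantum-through-a-face principle can be sketched in one dimension: the face whose flux would next accumulate one quantum of mass fires first, so high-flux regions take many short "steps" while quiescent regions wait. This is a toy serialized version with zero-flux boundaries, not the paper's full event-queue scheme; names and the quantum size are illustrative.

```python
import numpy as np

def async_diffusion(m, D, dx, quantum, t_end):
    """Event-driven sketch of 1D diffusion: each interior face waits a
    time quantum/|flux| before passing one quantum of mass downhill, so
    the event rate adapts locally to the flux, as described above.
    m: cell masses; zero-flux (closed) boundaries."""
    m = np.asarray(m, dtype=float).copy()
    t = 0.0
    while True:
        flux = D * (m[:-1] - m[1:]) / dx          # Fickian flux through each face
        wait = np.full(len(flux), np.inf)
        nz = flux != 0
        wait[nz] = np.abs(quantum / flux[nz])     # time to accumulate one quantum
        i = int(np.argmin(wait))                  # next event: fastest face
        if not np.isfinite(wait[i]) or t + wait[i] > t_end:
            return m, t
        t += wait[i]
        src, dst = (i, i + 1) if m[i] > m[i + 1] else (i + 1, i)
        m[src] -= quantum                         # quantum crosses the face
        m[dst] += quantum
```

Mass is conserved by construction (each event moves one quantum from one cell to its neighbour), and the cell profile relaxes toward uniform up to the quantum granularity.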
Shi, Yan; Wang, Hao Gang; Li, Long; Chan, Chi Hou
2008-10-01
A multilevel Green's function interpolation method based on two kinds of multilevel partitioning schemes, the quasi-2D and the hybrid partitioning scheme, is proposed for analyzing electromagnetic scattering from objects comprising both conducting and dielectric parts. The problem is formulated using the surface integral equation for homogeneous dielectric and conducting bodies. A quasi-2D multilevel partitioning scheme is devised to improve the efficiency of the Green's function interpolation. In contrast to previous multilevel partitioning schemes, noncubic groups are introduced to discretize the whole EM structure in this quasi-2D multilevel partitioning scheme. Based on the detailed analysis of the dimension of the group in this partitioning scheme, a hybrid quasi-2D/3D multilevel partitioning scheme is proposed to effectively handle objects with fine local structures. Selection criteria for some key parameters relating to the interpolation technique are given. The proposed algorithm is ideal for the solution of problems involving objects such as missiles, microstrip antenna arrays, photonic bandgap structures, etc. Numerical examples are presented to show that CPU time is between O(N) and O(N log N) while the computer memory requirement is O(N).
Evaluation of a new microphysical aerosol module in the ECMWF Integrated Forecasting System
NASA Astrophysics Data System (ADS)
Woodhouse, Matthew; Mann, Graham; Carslaw, Ken; Morcrette, Jean-Jacques; Schulz, Michael; Kinne, Stefan; Boucher, Olivier
2013-04-01
The Monitoring Atmospheric Composition and Climate II (MACC-II) project will provide a system for monitoring and predicting atmospheric composition. As part of the first phase of MACC, the GLOMAP-mode microphysical aerosol scheme (Mann et al., 2010, GMD) was incorporated within the ECMWF Integrated Forecasting System (IFS). The two-moment modal GLOMAP-mode scheme includes new particle formation, condensation, coagulation, cloud-processing, and wet and dry deposition. GLOMAP-mode is already incorporated as a module within the TOMCAT chemistry transport model and within the UK Met Office HadGEM3 general circulation model. The microphysical, process-based GLOMAP-mode scheme allows an improved representation of aerosol size and composition and can simulate aerosol evolution in the troposphere and stratosphere. The new aerosol forecasting and re-analysis system (known as IFS-GLOMAP) will also provide improved boundary conditions for regional air quality forecasts, and will benefit from assimilation of observed aerosol optical depths in near real time. Presented here is an evaluation of the performance of the IFS-GLOMAP system in comparison to in situ aerosol mass and number measurements, and remotely-sensed aerosol optical depth measurements. Future development will provide a fully-coupled chemistry-aerosol scheme, and the capability to resolve nitrate aerosol.
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey
2015-04-01
The proposed method is illustrated with hydrothermodynamics and atmospheric chemistry models [1,2]. Building on existing methods for constructing numerical schemes that possess the property of total approximation for operators of multiscale process models, we have developed a new variational technique that uses the concept of adjoint integrating factors. The technique is as follows. First, a basic functional of the variational principle (the integral identity that unites the model equations, initial and boundary conditions) is transformed using Lagrange's identity and the second Green's formula. As a result, the action of the operators of the main problem in the space of state functions is transferred to the adjoint operators defined in the space of sufficiently smooth adjoint functions. By choosing the adjoint functions appropriately, the order of the derivatives is reduced by one relative to the original equations. We obtain a set of new balance relationships that take into account the sources and boundary conditions. Next, we introduce a decomposition of the model domain into a set of finite volumes. For multi-dimensional non-stationary problems, this technique is applied within the variational principle and the schemes of decomposition and splitting on the set of physical processes, for each coordinate direction successively at each time step. For each direction within a finite volume, analytical solutions of the one-dimensional homogeneous adjoint equations are constructed. In this case, the solutions of the adjoint equations serve as integrating factors. The result is a family of hybrid discrete-analytical schemes. They possess stability, approximation, and unconditional monotonicity for convection-diffusion operators. These schemes are discrete in time and analytic in the spatial variables.
They are exact in the case of piecewise-constant coefficients within the finite volume and along the coordinate lines of the grid area in each direction on a time step. In each direction they have a tridiagonal structure and are solved by the sweep method. An important advantage of the discrete-analytical schemes is that the values of the derivatives at the boundaries of a finite volume are calculated together with the values of the unknown functions. The technique is particularly attractive for problems with dominant convection, as it requires no artificial monotonization or limiters. The same idea of integrating factors is applied in the temporal dimension to the stiff systems of equations describing chemical transformation models [2]. The proposed method is applicable to problems involving convection-diffusion-reaction operators. The work has been partially supported by the Presidium of RAS under Program 43, and by the RFBR grants 14-01-00125 and 14-01-31482. References: 1. V.V. Penenko, E.A. Tsvetova, A.V. Penenko. Variational approach and Euler's integrating factors for environmental studies // Computers and Mathematics with Applications, (2014) V. 67, Issue 12, P. 2240-2256. 2. V.V. Penenko, E.A. Tsvetova. Variational methods of constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013, V. 6, Issue 3, pp. 210-220.
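The tridiagonal systems produced by these schemes are solved by the sweep method (the Thomas algorithm). A minimal sketch of that solver, assuming a diagonally dominant, non-periodic system so that no pivoting is needed:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Tridiagonal solve by the sweep (Thomas) algorithm.

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Sketch of the kind of solver the schemes above rely on; assumes
    diagonal dominance so no pivoting is needed.
    """
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                 # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```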
NASA Technical Reports Server (NTRS)
Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet
1994-01-01
This paper presents an application of a fault detection and diagnosis scheme for the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real time identification and hypothesis testing which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique where parameters which model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation and data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.
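The residual-threshold test at the core of the detection step can be sketched as follows; the channel values and thresholds are illustrative placeholders, not T700 data, and the isolation/diagnosis step via on-line parameter estimation is not modeled:

```python
import numpy as np

def detect_sensor_faults(measured, expected, thresholds):
    """Flag each sensor channel whose residual (measured output minus
    model-predicted output) exceeds its threshold. Minimal sketch of
    the detection step described above; all values are illustrative."""
    residuals = np.abs(np.asarray(measured, float) - np.asarray(expected, float))
    return residuals > np.asarray(thresholds, float)
```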
Mang, Andreas; Biros, George
2017-01-01
We propose an efficient numerical algorithm for the solution of diffeomorphic image registration problems. We use a variational formulation constrained by a partial differential equation (PDE), where the constraint is a scalar transport equation. We use a pseudospectral discretization in space and a second-order accurate semi-Lagrangian time stepping scheme for the transport equations. We solve for a stationary velocity field using a preconditioned, globalized, matrix-free Newton-Krylov scheme. We propose and test a two-level Hessian preconditioner. We consider two strategies for inverting the preconditioner on the coarse grid: a nested preconditioned conjugate gradient method (exact solve) and a nested Chebyshev iterative method (inexact solve) with a fixed number of iterations. We test the performance of our solver in different synthetic and real-world two-dimensional application scenarios. We study grid convergence and computational efficiency of our new scheme. We compare the performance of our solver against our initial implementation, which uses the same spatial discretization but a standard, explicit, second-order Runge-Kutta scheme for the numerical time integration of the transport equations and a single-level preconditioner. Our improved scheme delivers significant speedups over our original implementation. As a highlight, we observe a 20× speedup for a two-dimensional, real-world multi-subject medical image registration problem.
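The semi-Lagrangian idea for the transport equation (trace each grid point back along the characteristic and interpolate the previous field there) can be sketched in 1D. This toy uses linear interpolation and a constant velocity, whereas the paper's solver is two-dimensional, pseudospectral, and second-order accurate in time:

```python
import numpy as np

def semi_lagrangian_step(q, v, dt, x):
    """One semi-Lagrangian step for q_t + v q_x = 0 on a uniform
    periodic grid x. A 1D linear-interpolation sketch only."""
    L = x[-1] - x[0] + (x[1] - x[0])   # period of the uniform grid
    x_dep = x - v * dt                 # departure points of characteristics
    return np.interp(x_dep, x, q, period=L)
```

Because the scheme interpolates along characteristics rather than differencing in space, the time step is not CFL-restricted, which is the property the paper exploits.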
NASA Astrophysics Data System (ADS)
Liao, Feng; Zhang, Luming; Wang, Shanshan
2018-02-01
In this article, we formulate an efficient and accurate numerical method for approximating the coupled Schrödinger-Boussinesq (SBq) system. The main features of our method are: (i) the application of a time-splitting Fourier spectral method to the Schrödinger-like equation in the SBq system, and (ii) the use of an exponential wave integrator Fourier pseudospectral method for the spatial derivatives in the Boussinesq-like equation. The scheme is fully explicit and efficient due to the fast Fourier transform. Numerical examples are presented to show the efficiency and accuracy of our method.
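The time-splitting Fourier spectral idea can be illustrated on a generic Schrödinger equation i ψ_t = -(1/2) ψ_xx + V ψ; this sketch does not model the coupling to the Boussinesq-like equation in the SBq system:

```python
import numpy as np

def strang_step(psi, V, k, dt):
    """One Strang-splitting step for i psi_t = -(1/2) psi_xx + V psi
    on a periodic grid (k = Fourier wavenumbers). The potential part is
    exact pointwise; the kinetic part is exact in Fourier space."""
    psi = np.exp(-0.5j * dt * V) * psi        # half step: potential
    psi_hat = np.fft.fft(psi)
    psi_hat *= np.exp(-0.5j * dt * k**2)      # full step: kinetic term
    psi = np.fft.ifft(psi_hat)
    return np.exp(-0.5j * dt * V) * psi       # half step: potential
```

Each substep is a unitary multiplication, so the splitting conserves the discrete L2 norm exactly, which is one reason such schemes suit long-time wave propagation.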
N-body simulations of star clusters
NASA Astrophysics Data System (ADS)
Engle, Kimberly Anne
1999-10-01
We investigate the structure and evolution of underfilling (i.e. non-Roche-lobe-filling) King model globular star clusters using N-body simulations. We model clusters with various underfilling factors and mass distributions to determine their evolutionary tracks and lifetimes. These models include a self-consistent galactic tidal field, mass loss due to stellar evolution, ejection, and evaporation, and binary evolution. We find that a star cluster that initially does not fill its Roche lobe can live many times longer than one that does initially fill its Roche lobe. After a few relaxation times, the cluster expands to fill its Roche lobe. We also find that the choice of initial mass function significantly affects the lifetime of the cluster. These simulations were performed on the GRAPE-4 (GRAvity PipE) special-purpose hardware with the stellar dynamics package ``Starlab.'' The GRAPE-4 system is a massively-parallel computer designed to calculate the force (and its first time derivative) due to N particles. Starlab's integrator ``kira'' employs a 4th-order Hermite scheme with hierarchical (block) time steps to evolve the stellar system. We discuss, in some detail, the design of the GRAPE-4 system and the manner in which the Hermite integration scheme with block time steps is implemented in the hardware.
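A single-particle sketch of the 4th-order Hermite predictor-corrector step used by kira; the real integrator applies this per particle with direct N-body forces and hierarchical block time steps:

```python
def hermite_step(x, v, acc_jerk, dt):
    """One 4th-order Hermite predictor-corrector step.

    acc_jerk(x, v) must return (a, j): the acceleration and its time
    derivative (jerk). Single-particle sketch only."""
    a0, j0 = acc_jerk(x, v)
    # predictor: Taylor expansion through the jerk term
    xp = x + v * dt + a0 * dt**2 / 2 + j0 * dt**3 / 6
    vp = v + a0 * dt + j0 * dt**2 / 2
    a1, j1 = acc_jerk(xp, vp)
    # corrector: Hermite interpolation of the acceleration
    v1 = v + (a0 + a1) * dt / 2 + (j0 - j1) * dt**2 / 12
    x1 = x + (v + v1) * dt / 2 + (a0 - a1) * dt**2 / 12
    return x1, v1
```

The scheme needs only two force evaluations per step because the jerk supplies the extra derivative information a Runge-Kutta method would obtain from additional stages, which is what makes it a natural fit for force-pipeline hardware like GRAPE-4.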
Adaptive Numerical Algorithms in Space Weather Modeling
NASA Technical Reports Server (NTRS)
Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.;
2010-01-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems.
BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes. Depending on the application, we find that different time stepping methods are optimal. Several of the time integration schemes exploit the block-based granularity of the grid structure. The framework and the adaptive algorithms enable physics based space weather modeling and even forecasting.
Assimilating the Future for Better Forecasts and Earlier Warnings
NASA Astrophysics Data System (ADS)
Du, H.; Wheatcroft, E.; Smith, L. A.
2016-12-01
Multi-model ensembles have become popular tools to account for some of the uncertainty due to model inadequacy in weather and climate simulation-based predictions. Current multi-model forecasts focus on combining single model ensemble forecasts by means of statistical post-processing. Since each model is developed independently or with different primary target variables, each is likely to have different dynamical strengths and weaknesses. With statistical post-processing, such information is carried only by the simulations under a single model ensemble: no advantage is taken of it to influence simulations under the other models. A novel methodology, named Multi-model Cross Pollination in Time, is proposed as a multi-model ensemble scheme with the aim of integrating the dynamical information regarding the future from each individual model operationally. The proposed approach generates model states in time by applying data assimilation scheme(s) to yield truly "multi-model trajectories". It is demonstrated to outperform traditional statistical post-processing in the 40-dimensional Lorenz96 flow. Data assimilation approaches were originally designed to improve state estimation from the past up to the current time. The aim of this talk is to introduce a framework that uses data assimilation to improve model forecasts at future times (not to argue for any one particular data assimilation scheme). An illustration of applying data assimilation "in the future" to provide early warning of future high-impact events is also presented.
Robust Integration Schemes for Generalized Viscoplasticity with Internal-State Variables
NASA Technical Reports Server (NTRS)
Saleeb, Atef F.; Li, W.; Wilt, Thomas E.
1997-01-01
The scope of the work in this presentation focuses on the development of algorithms for the integration of rate-dependent constitutive equations. Implicit integration schemes have been selected in view of their robustness, i.e., their superior stability and convergence properties for isotropic and anisotropic coupled viscoplastic-damage models. The selected scheme is the simplest in its class and is one of the most widely used implicit integrators at present.
NASA Astrophysics Data System (ADS)
Anastasiadis, Anastasios; Sandberg, Ingmar; Papaioannou, Athanasios; Georgoulis, Manolis; Tziotziou, Kostas; Jiggens, Piers; Hilgers, Alain
2015-04-01
We present a novel integrated prediction system for both solar flares and solar energetic particle (SEP) events, which is in place to provide short-term warnings for hazardous solar radiation storms. The FORSPEF system provides forecasting of solar eruptive events, such as solar flares with a projection to coronal mass ejections (CMEs) (occurrence and velocity) and the likelihood of occurrence of an SEP event. It also provides nowcasting of SEP events based on actual solar flare and CME near real-time alerts, as well as SEP characteristics (peak flux, fluence, rise time, duration) per parent solar event. The prediction of solar flares relies on a morphological method based on the derivation of the effective connected magnetic field strength (Beff) of potentially flaring active-region (AR) magnetic configurations, utilizing analysis of a large number of AR magnetograms. For the prediction of SEP events, a new reductive statistical method has been implemented based on a newly constructed database of solar flares, CMEs, and SEP events that covers the large time span 1984-2013. The method is based on flare location (longitude), flare size (maximum soft X-ray intensity), and the occurrence (or not) of a CME. Warnings are issued for all > C1.0 soft X-ray flares. The warning time in the forecasting scheme extends to 24 hours with a refresh rate of 3 hours, while the respective warning time for the nowcasting scheme depends on the availability of the near real-time data and is between 15 and 20 minutes. We discuss the modules of the FORSPEF system, their interconnection, and the operational setup. The dual approach in the development of FORSPEF (i.e., forecasting and nowcasting schemes) permits the refinement of predictions upon the availability of new data that characterize changes on the Sun and in interplanetary space, while the combined usage of solar flare and SEP forecasting methods makes FORSPEF an integrated forecasting solution.
This work has been funded through the "FORSPEF: FORecasting Solar Particle Events and Flares", ESA Contract No. 4000109641/13/NL/AK
Application of a symmetric total variation diminishing scheme to aerodynamics of rotors
NASA Astrophysics Data System (ADS)
Usta, Ebru
2002-09-01
The aerodynamic characteristics of rotors in hover have been studied on stretched non-orthogonal grids using spatially high order symmetric total variation diminishing (STVD) schemes. Several companion numerical viscosity terms have been tested. The effects of higher order metrics, higher order load integration, and turbulence modeling on the rotor performance have been studied. Where possible, calculations for 1-D and 2-D benchmark problems have been done on uniform grids, and comparisons with exact solutions have been made to understand the dispersion and dissipation characteristics of these algorithms. A baseline finite volume methodology termed TURNS (Transonic Unsteady Rotor Navier-Stokes) is the starting point for this effort. The original TURNS solver solves the 3-D compressible Navier-Stokes equations in integral form using a third order upwind scheme and is first or second order accurate in time. In the modified solver, the inviscid flux at a cell face is decomposed into two parts. The first part of the flux is symmetric in space, while the second part consists of an upwind-biased numerical viscosity term. The symmetric part of the flux at the cell face is computed to fourth-, sixth-, or eighth-order accuracy in space. The numerical viscosity portion of the flux is computed using either a third order accurate MUSCL scheme or a fifth order WENO scheme. A number of results are presented for the two-bladed Caradonna-Tung rotor and for a four-bladed UH-60A rotor in hover. Comparisons with the original TURNS code and with experiments are given. Results are also presented on the effects of metric calculations, load integration algorithms, and turbulence models on solution accuracy. A total of 64 combinations were studied in this thesis work. For brevity, only a small subset of results highlighting the most important conclusions is presented.
It should be noted that the use of higher order formulations did not affect the temporal stability of the algorithm and did not require any reduction in the time step. The calculations show that solution accuracy increases when the 3rd order upwind scheme in the baseline algorithm is replaced with 4th and 6th order accurate symmetric flux calculations. A point of diminishing returns is reached as increasingly larger stencils are used on highly stretched grids. The numerical viscosity term, when computed with the third order MUSCL scheme, is very dissipative and does not resolve the tip vortex well. The WENO5 scheme, on the other hand, significantly improves tip vortex capturing. The STVD6+WENO5 scheme, in particular, gave the best combination of solution accuracy and efficiency on stretched grids. Spatially fourth order accurate metric calculations were found to be beneficial, but should be used in conjunction with a limiter that drops the metric calculation to second order accuracy in the vicinity of grid discontinuities. High order integration of loads was found to have a beneficial, but small, effect on the computed loads. Replacing the Baldwin-Lomax turbulence model with the one-equation Spalart-Allmaras model resulted in higher than expected profile power contributions. Nevertheless, the one-equation model is recommended for its robustness, its ability to model separated flows at high thrust settings, and the natural manner in which turbulence in the rotor wake may be treated.
Intel Xeon Phi accelerated Weather Research and Forecasting (WRF) Goddard microphysics scheme
NASA Astrophysics Data System (ADS)
Mielikainen, J.; Huang, B.; Huang, A. H.-L.
2014-12-01
The Weather Research and Forecasting (WRF) model is a numerical weather prediction system designed to serve both atmospheric research and operational forecasting needs. WRF development is done in collaboration around the globe, and the model is used by academic atmospheric scientists, weather forecasters at operational centers, and others. The WRF contains several physics components, of which the most time consuming is the microphysics. One such scheme is the sophisticated Goddard cloud microphysics scheme. The Goddard scheme is well suited for massively parallel computation, as there are no interactions among horizontal grid points, and compared to earlier microphysics schemes it incorporates a large number of improvements. We have therefore optimized the Goddard scheme code. In this paper, we present our results of optimizing the Goddard microphysics scheme on Intel Many Integrated Core (MIC) architecture hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture; it consists of up to 61 cores connected by a high performance on-die bidirectional interconnect. Unlike a GPU, the Intel MIC is capable of executing a full operating system and entire programs rather than just kernels. The MIC coprocessor supports all important Intel development tools, so the development environment is familiar to a vast number of CPU developers. However, obtaining maximum performance on the MIC requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the Goddard microphysics scheme on the Xeon Phi 7120P by a factor of 4.7×. In addition, the optimizations reduced the Goddard microphysics scheme's share of the total WRF processing time from 20.0% to 7.5%.
Furthermore, the same optimizations improved performance on Intel Xeon E5-2670 by a factor of 2.8× compared to the original code.
Sheng, Xinzhi; Feng, Zhen; Li, Bing
2013-04-20
We proposed and experimentally demonstrated an all-optical packet-level time slot assignment scheme using two cascaded optical buffers. The function of time-slot interchange (TSI) was successfully implemented on two and three optical packets at a data rate of 10 Gb/s; accordingly, TSI on N packets should be readily implemented using an N-1 stage optical buffer. On the basis of this experiment, we carried out a TSI experiment on four packets with the same two-stage experimental setup. Furthermore, packet compression on three optical packets was also carried out with the same experimental setup. The shortest guard time of the packet compression reaches 13 ns, limited by the FPGA's control accuracy. Owing to the reuse of the same optical buffer, the proposed scheme has the advantages of a simple and scalable configuration, modularization, and easy integration.
Local bounds preserving stabilization for continuous Galerkin discretization of hyperbolic systems
NASA Astrophysics Data System (ADS)
Mabuza, Sibusiso; Shadid, John N.; Kuzmin, Dmitri
2018-05-01
The objective of this paper is to present a local bounds preserving stabilized finite element scheme for hyperbolic systems on unstructured meshes, based on a continuous Galerkin (CG) discretization in space. A CG semi-discrete scheme with low order artificial dissipation that satisfies the local extremum diminishing (LED) condition for systems is used to discretize a system of conservation equations in space. The low order artificial diffusion is based on approximate Riemann solvers for hyperbolic conservation laws; we consider both Rusanov and Roe artificial diffusion operators. In the Rusanov case, two designs are considered: a nodal based diffusion operator and a local projection stabilization operator. The result is a discretization that is LED and has first order convergence behavior. To achieve high resolution, limited antidiffusion is added back to the semi-discrete form, where the limiter is constructed from a linearity preserving local projection stabilization operator. The procedure follows the algebraic flux correction procedure commonly used in flux-corrected transport (FCT) algorithms. To further deal with phase errors (or terracing) common in FCT-type methods, high order background dissipation is added to the antidiffusive correction. The resulting stabilized semi-discrete scheme can be discretized in time using a wide variety of time integrators. Numerical examples involving the nonlinear scalar Burgers equation and several shock hydrodynamics simulations for the Euler system are considered to demonstrate the performance of the method. For time discretization, the Crank-Nicolson and backward Euler schemes are utilized.
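The Rusanov (local Lax-Friedrichs) dissipation that the low-order operator is built from can be illustrated for the scalar Burgers flux f(q) = q²/2; a 1D face-flux sketch, not the paper's nodal CG operator:

```python
import numpy as np

def rusanov_flux(ql, qr):
    """Rusanov (local Lax-Friedrichs) numerical flux for Burgers'
    equation, f(q) = q^2 / 2: a central average plus dissipation scaled
    by the local maximum wave speed |q|. A scalar 1D sketch of the kind
    of low-order artificial diffusion built from approximate Riemann
    solvers."""
    fl, fr = 0.5 * ql**2, 0.5 * qr**2
    s = np.maximum(np.abs(ql), np.abs(qr))   # local max wave speed
    return 0.5 * (fl + fr) - 0.5 * s * (qr - ql)
```

The dissipation term is what makes the low-order scheme monotone (LED); the limited antidiffusion then removes as much of it as local bounds allow.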
Islam, S K Hafizul; Khan, Muhammad Khurram; Li, Xiong
2015-01-01
Over the past few years, secure and privacy-preserving user authentication schemes have become an integral part of healthcare system applications. Recently, Wen designed an improved user authentication system over the Lee et al. scheme for integrated electronic patient record (EPR) information systems, which is analyzed in this study. We found that Wen's scheme still has the following deficiencies: (1) the correctness of the identity and password is not verified during the login and password change phases; (2) it is vulnerable to impersonation attack and privileged-insider attack; (3) it is designed without revocation of lost/stolen smart cards; (4) the explicit key confirmation and no key control properties are absent; and (5) a user cannot update his/her password without the help of the server and a secure channel. We therefore propose an enhanced two-factor user authentication system based on the intractability assumption of the quadratic residue problem (QRP) in the multiplicative group. Our scheme offers more security features and functionality than other schemes in the literature.
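The quadratic residue problem underlying such schemes can be illustrated in a few lines: publishing c = m² mod n hides m when the factorization of n is unknown, while square roots modulo a known prime p ≡ 3 (mod 4) are easy to compute. A toy sketch of the hardness assumption only, not the authentication protocol itself:

```python
def qr_commit(m, n):
    """Publish c = m^2 mod n. Recovering m without the factors of n is
    as hard as factoring n (the quadratic residue problem). Toy sketch
    of the assumption, not the scheme from the paper."""
    return pow(m, 2, n)

def sqrt_mod_prime(c, p):
    """Square root of a quadratic residue c modulo a prime p = 3 (mod 4):
    r = c^((p+1)/4) mod p. With the factors known, roots are easy."""
    assert p % 4 == 3
    return pow(c, (p + 1) // 4, p)
```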
Parareal algorithms with local time-integrators for time fractional differential equations
NASA Astrophysics Data System (ADS)
Wu, Shu-Lin; Zhou, Tao
2018-04-01
It is challenging to design parareal algorithms for time-fractional differential equations due to the history effect of the fractional operator. A direct extension of the classical parareal method to such equations leads to unbalanced computational time in each process. In this work, we present an efficient parareal iteration scheme that overcomes this issue by adopting two recently developed local time-integrators for time fractional operators. In both approaches, one introduces auxiliary variables to localize the fractional operator. To this end, we propose a new strategy to perform the coarse grid correction so that the auxiliary variables and the solution variable are corrected separately in a mixed pattern. It is shown that the proposed parareal algorithm admits a robust rate of convergence. Numerical examples are presented to support our conclusions.
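The classical parareal iteration the paper builds on combines a cheap coarse propagator G with an accurate fine propagator F. A sketch of the standard (non-fractional) version, without the paper's mixed-pattern correction of auxiliary variables:

```python
import numpy as np

def parareal(y0, t, F, G, K):
    """Classical parareal iteration. F and G each map (u, t0, t1) to an
    approximation of u(t1); the fine solves Fu can run in parallel over
    the subintervals, which is the point of the method."""
    N = len(t) - 1
    U = [y0]
    for n in range(N):                     # serial coarse sweep
        U.append(G(U[-1], t[n], t[n + 1]))
    for _ in range(K):
        Fu = [F(U[n], t[n], t[n + 1]) for n in range(N)]   # parallel in practice
        Gu = [G(U[n], t[n], t[n + 1]) for n in range(N)]
        Unew = [y0]
        for n in range(N):                 # serial correction sweep
            Unew.append(G(Unew[n], t[n], t[n + 1]) + Fu[n] - Gu[n])
        U = Unew
    return np.array(U)
```

After k iterations the first k+1 values match the fine propagation exactly, so K = N reproduces the serial fine solution; the gain comes from needing far fewer iterations in practice.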
2010-04-01
An approximate factorization scheme (lower-upper symmetric Gauss-Seidel) can be used for time integration, with additional convergence acceleration. The diffusive mass flux of species S is computed according to an approximation of the full Stefan-Maxwell equations. For steady-state problems, the time-integration equation reduces to R = 0 because dU/dt = 0.
Universal block diagram based modeling and simulation schemes for fractional-order control systems.
Bai, Lu; Xue, Dingyü
2017-05-08
Universal block diagram based schemes are proposed for modeling and simulating fractional-order control systems in this paper. A fractional operator block in Simulink is designed to evaluate the fractional-order derivative and integral. Based on this block, fractional-order control systems with zero initial conditions can be modeled conveniently. For modeling a system with nonzero initial conditions, an auxiliary signal is constructed in the compensation scheme. Since the compensation scheme is complicated, an integrator chain scheme is further proposed to simplify the modeling procedure. The accuracy and effectiveness of the schemes are assessed through examples; the computational results confirm that the block diagram scheme is efficient for Caputo fractional-order ordinary differential equations (FODEs) of any complexity, including implicit Caputo FODEs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
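One common way to evaluate a fractional-order derivative numerically is the Grünwald-Letnikov series; a generic sketch of how such an operator block can be evaluated on sampled data (this is not the Simulink block from the paper):

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), computed
    by the standard recurrence."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def gl_derivative(f, alpha, h):
    """Order-alpha Grunwald-Letnikov derivative of samples f on a
    uniform grid with step h, assuming zero history before the first
    sample. For alpha = 1 this reduces to the backward difference."""
    n = len(f) - 1
    w = gl_weights(alpha, n)
    d = np.array([np.dot(w[:j + 1], f[j::-1]) for j in range(n + 1)])
    return d / h**alpha
```

Note the full sample history enters every evaluation; this memory effect is exactly what makes fractional operators expensive and motivates the block-based tooling discussed above.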
A single-stage flux-corrected transport algorithm for high-order finite-volume methods
Chaplin, Christopher; Colella, Phillip
2017-05-08
We present a new limiter method for solving the advection equation using a high-order, finite-volume discretization. The limiter is based on the flux-corrected transport (FCT) algorithm. Here, we modify the classical algorithm by introducing a new computation for solution bounds at smooth extrema, as well as improving the preconstraint on the high-order fluxes. We compute the high-order fluxes via a method-of-lines approach with fourth-order Runge-Kutta as the time integrator. For computing low-order fluxes, we select the corner-transport upwind method due to its improved stability over donor-cell upwind. Several spatial differencing schemes are investigated for the high-order flux computation, including centered-difference and upwind schemes. We show that the upwind schemes perform well on account of the dissipation of high-wavenumber components. The new limiter method retains high-order accuracy for smooth solutions and accurately captures fronts in discontinuous solutions. Further, we need only apply the limiter once per complete time step.
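The classical FCT construction (low-order monotone flux plus limited antidiffusion) can be sketched in 1D for constant-velocity advection. Here the high-order flux is Lax-Wendroff and the limiter is Zalesak's, whereas the paper uses fourth-order method-of-lines fluxes and corner-transport upwind:

```python
import numpy as np

def fct_step(q, v, dx, dt):
    """One FCT step for q_t + v q_x = 0 (v > 0, periodic grid):
    donor-cell low-order fluxes plus Zalesak-limited antidiffusion
    toward the Lax-Wendroff flux. A 1D sketch of the classical
    algorithm the paper modifies."""
    c = v * dt / dx
    qm = np.roll(q, 1)                                # q_{i-1}
    f_lo = v * qm                                     # donor-cell flux at i-1/2
    f_hi = v * (qm + 0.5 * (1.0 - c) * (q - qm))      # Lax-Wendroff flux
    a = f_hi - f_lo                                   # antidiffusive flux
    q_td = q - c * (q - qm)                           # low-order update
    # local bounds taken from the transported low-order solution
    q_max = np.maximum.reduce([np.roll(q_td, 1), q_td, np.roll(q_td, -1)])
    q_min = np.minimum.reduce([np.roll(q_td, 1), q_td, np.roll(q_td, -1)])
    a_next = np.roll(a, -1)
    p_in = np.maximum(a, 0.0) - np.minimum(a_next, 0.0)    # inflow to cell i
    p_out = np.maximum(a_next, 0.0) - np.minimum(a, 0.0)   # outflow from cell i
    eps = 1e-15
    r_plus = np.minimum(1.0, (q_max - q_td) / (dt / dx * p_in + eps))
    r_minus = np.minimum(1.0, (q_td - q_min) / (dt / dx * p_out + eps))
    # each face flux limited by the stricter of its two adjacent cells
    C = np.where(a >= 0.0,
                 np.minimum(r_plus, np.roll(r_minus, 1)),
                 np.minimum(np.roll(r_plus, 1), r_minus))
    return q_td - dt / dx * (np.roll(C * a, -1) - C * a)
```

Because the correction is applied in flux form, the update is conservative, and the limiter guarantees the result stays within the local bounds of the low-order solution.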
NASA Astrophysics Data System (ADS)
Aftosmis, Michael J.
1992-10-01
A new node based upwind scheme for the solution of the 3D Navier-Stokes equations on adaptively refined meshes is presented. The method uses a second-order upwind TVD scheme to integrate the convective terms, and discretizes the viscous terms with a new compact central difference technique. Grid adaptation is achieved through directional division of hexahedral cells in response to evolving features as the solution converges. The method is advanced in time with a multistage Runge-Kutta time stepping scheme. Two- and three-dimensional examples establish the accuracy of the inviscid and viscous discretization. These investigations highlight the ability of the method to produce crisp shocks, while accurately and economically resolving viscous layers. The representation of these and other structures is shown to be comparable to that obtained by structured methods. Further 3D examples demonstrate the ability of the adaptive algorithm to effectively locate and resolve multiple scale features in complex 3D flows with many interacting, viscous, and inviscid structures.
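A multistage Runge-Kutta time stepping scheme of the kind used here can be sketched generically; the stage coefficients below are a common Jameson-style textbook choice (fourth-order for linear problems), not necessarily those of this solver:

```python
import numpy as np

def multistage_rk(q, residual, dt, alphas=(1/4, 1/3, 1/2, 1.0)):
    """Jameson-style multistage update: q^(k) = q^n + alpha_k * dt * R(q^(k-1)),
    restarting each stage from the time-level-n state. Generic CFD-style
    sketch with assumed coefficients."""
    qn = q.copy()
    for a in alphas:
        q = qn + a * dt * residual(q)
    return q
```

Only two solution arrays are needed regardless of the number of stages, which is why such low-storage schemes are popular in CFD solvers.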
Generation of skeletal mechanism by means of projected entropy participation indices
NASA Astrophysics Data System (ADS)
Paolucci, Samuel; Valorani, Mauro; Ciottoli, Pietro Paolo; Galassi, Riccardo Malpica
2017-11-01
When the dynamics of reactive systems develop very slow and very fast time scales separated by a range of active time scales, with gaps in the fast/active and slow/active time scales, it is possible to achieve multi-scale adaptive model reduction along with the integration of the ODEs using the G-Scheme. The scheme assumes that the dynamics is decomposed into active, slow, fast, and invariant subspaces. We derive expressions that establish a direct link between time scales and entropy production by using estimates provided by the G-Scheme. To calculate the contribution to entropy production, we resort to a standard model of a constant pressure, adiabatic batch reactor, where the mixture temperature of the reactants is initially set above the auto-ignition temperature. Numerical experiments show that the contribution to entropy production of the fast subspace is of the same magnitude as the error threshold chosen for the identification of the decomposition of the tangent space, and that the contribution of the slow subspace is generally much smaller than that of the active subspace. The information on entropy production associated with reactions within each subspace is used to define an entropy participation index that is subsequently utilized for model reduction.
The space-time solution element method: A new numerical approach for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Scott, James R.; Chang, Sin-Chung
1995-01-01
This paper is one of a series of papers describing the development of a new numerical method for the Navier-Stokes equations. Unlike conventional numerical methods, the current method concentrates on the discrete simulation of both the integral and differential forms of the Navier-Stokes equations. Conservation of mass, momentum, and energy in space-time is explicitly provided for through a rigorous enforcement of both the integral and differential forms of the governing conservation laws. Using local polynomial expansions to represent the discrete primitive variables on each cell, fluxes at cell interfaces are evaluated and balanced using exact functional expressions. No interpolation or flux limiters are required. Because of the generality of the current method, it applies equally to the steady and unsteady Navier-Stokes equations. In this paper, we generalize and extend the authors' 2-D, steady state implicit scheme. A general closure methodology is presented so that all terms up through a given order in the local expansions may be retained. The scheme is also extended to nonorthogonal Cartesian grids. Numerous flow fields are computed and results are compared with known solutions. The high accuracy of the scheme is demonstrated through its ability to accurately resolve developing boundary layers on coarse grids. Finally, we discuss applications of the current method to the unsteady Navier-Stokes equations.
Pyshkin, P V; Luo, Da-Wei; Jing, Jun; You, J Q; Wu, Lian-Ao
2016-11-25
Holonomic quantum computation (HQC) may not show its full potential in quantum speedup due to the prerequisite of a long coherent runtime imposed by the adiabatic condition. Here we show that the conventional HQC can be dramatically accelerated by using external control fields, of which the effectiveness is exclusively determined by the integral of the control fields in the time domain. This control scheme can be realized with net zero energy cost and it is fault-tolerant against fluctuation and noise, significantly relaxing the experimental constraints. We demonstrate how to realize the scheme via decoherence-free subspaces. In this way we unify quantum robustness merits of this fault-tolerant control scheme, the conventional HQC and decoherence-free subspace, and propose an expedited holonomic quantum computation protocol.
NASA Technical Reports Server (NTRS)
Liu, Chao-Qun; Shan, H.; Jiang, L.
1999-01-01
Numerical investigation of flow separation over a NACA 0012 airfoil at large angles of attack has been carried out. The numerical calculation is performed by solving the full Navier-Stokes equations in generalized curvilinear coordinates. The second-order LU-SGS implicit scheme is applied for time integration. This scheme requires no tridiagonal inversion and is capable of being completely vectorized, provided the corresponding Jacobian matrices are properly selected. A fourth-order centered compact scheme is used for spatial derivatives. In order to reduce numerical oscillation, a sixth-order implicit filter is employed. Non-reflecting boundary conditions are imposed at the far-field and outlet boundaries to avoid possible non-physical wave reflection. Complex flow separation and vortex shedding phenomena have been observed and discussed.
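A fourth-order centered compact scheme of the kind mentioned here can be sketched on a periodic grid; this is the classic Padé formulation, which may differ in detail from the scheme actually used in the study.

```python
import numpy as np

def compact_first_derivative(f, h):
    """Fourth-order Pade compact scheme on a periodic grid:
    (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = (3/2)(f_{i+1} - f_{i-1})/(2h).
    A dense solve is used here for clarity; production codes would use a
    cyclic tridiagonal solver."""
    n = len(f)
    A = np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
    A[0, -1] = A[-1, 0] = 0.25          # periodic wrap-around entries
    rhs = 1.5 * (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)
    return np.linalg.solve(A, rhs)

n = 64
x = np.linspace(0, 1, n, endpoint=False)
h = x[1] - x[0]
df = compact_first_derivative(np.sin(2 * np.pi * x), h)
err = np.max(np.abs(df - 2 * np.pi * np.cos(2 * np.pi * x)))
```

On a smooth test function the scheme's fourth-order accuracy makes the error tiny even on this coarse grid.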
Note: Fully integrated 3.2 Gbps quantum random number generator with real-time extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiao-Guang; Nie, You-Qi; Liang, Hao
2016-07-15
We present a real-time and fully integrated quantum random number generator (QRNG) by measuring laser phase fluctuations. The QRNG scheme based on laser phase fluctuations is featured for its capability of generating ultra-high-speed random numbers. However, the speed bottleneck of a practical QRNG lies in the limited speed of randomness extraction. To close the gap between the fast randomness generation and the slow post-processing, we propose a pipeline extraction algorithm based on Toeplitz matrix hashing and implement it in a high-speed field-programmable gate array. Further, all the QRNG components are integrated into a module, including a compact and actively stabilized interferometer, high-speed data acquisition, and real-time data post-processing and transmission. The final generation rate of the QRNG module with real-time extraction can reach 3.2 Gbps.
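The Toeplitz-matrix hashing at the heart of the extractor can be sketched in its textbook form; the paper's contribution is a pipelined FPGA implementation, which is not reproduced here.

```python
import numpy as np

def toeplitz_extract(raw_bits, seed_bits, m):
    """Toeplitz-hashing randomness extractor: output = T @ raw mod 2,
    where the m-by-n binary Toeplitz matrix T is defined by an
    (m + n - 1)-bit seed (constant along each diagonal)."""
    n = len(raw_bits)
    assert len(seed_bits) == m + n - 1
    i = np.arange(m)[:, None]
    j = np.arange(n)[None, :]
    T = seed_bits[i - j + n - 1]        # T[i, j] depends only on i - j
    return (T @ raw_bits) % 2

rng = np.random.default_rng(0)
raw = rng.integers(0, 2, size=256)               # raw, partially random bits
seed = rng.integers(0, 2, size=64 + 256 - 1)     # uniform seed bits
out = toeplitz_extract(raw, seed, m=64)          # 64 extracted bits
```

Compressing 256 raw bits to 64 output bits reflects the usual situation where the raw source has less than one bit of min-entropy per sample.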
Innovative hyperchaotic encryption algorithm for compressed video
NASA Astrophysics Data System (ADS)
Yuan, Chun; Zhong, Yuzhuo; Yang, Shiqiang
2002-12-01
It is accepted that a stream cryptosystem can achieve good real-time performance and flexibility by encrypting only selected parts of the block data and header information of the compressed video stream. A chaotic random number generator, for example the logistic map, is a comparatively promising substitute, but it is easily attacked by nonlinear dynamic forecasting and geometric information extraction. In this paper, we present a hyperchaotic cryptography scheme to encrypt compressed video, which integrates the logistic map with a linear congruential algorithm over the field Z(2^32 - 1) to strengthen the security of mono-chaotic cryptography, while the real-time performance and flexibility of chaotic sequence cryptography are maintained. It also integrates asymmetric public-key cryptography and performs encryption and identity authentication on control parameters at the initialization phase. In accordance with the importance of the data in the compressed video stream, encryption is performed in a layered scheme. In this hyperchaotic cryptography, the value and the updating frequency of the control parameters can be changed online to satisfy requirements on network quality, processor capability and security. The scheme proves robust under cryptanalysis, and shows good real-time performance and flexible implementation through arithmetic evaluation and testing.
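A minimal sketch of the core idea, combining a logistic map with a linear congruential generator over Z(2^32 - 1) to form an XOR keystream, might look as follows; all parameters are invented for illustration and are not those of the proposed cryptosystem.

```python
def keystream(x0, lcg_seed, nbytes):
    """Toy hyperchaotic-style keystream: logistic map combined with a
    linear congruential generator over Z_(2^32 - 1). Illustrative only."""
    M = 2**32 - 1
    x, s = x0, lcg_seed
    out = bytearray()
    for _ in range(nbytes):
        x = 3.9999 * x * (1.0 - x)            # logistic map, chaotic regime
        s = (1664525 * s + 1013904223) % M    # LCG over Z_M
        out.append((int(x * 256) ^ (s & 0xFF)) & 0xFF)
    return bytes(out)

def xor_encrypt(data, x0=0.3141, lcg_seed=12345):
    ks = keystream(x0, lcg_seed, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"compressed video header"
ct = xor_encrypt(msg)
pt = xor_encrypt(ct)     # XOR stream cipher: decryption is re-encryption
```

XORing with the same keystream twice recovers the plaintext, which is why selective header encryption can be made very cheap at both ends.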
Structural dynamics payload loads estimates: User guide
NASA Technical Reports Server (NTRS)
Shanahan, T. G.; Engels, R. C.
1982-01-01
This User Guide begins with an overview of an integration scheme to determine the response of a launch vehicle with multiple payloads. Chapter II discusses the software package associated with the integration scheme together with several sample problems. A shortcut version of the integration technique is also discussed. The Guide concludes with a list of references and the listings of the subroutines.
ERIC Educational Resources Information Center
Peterson, Matthew O.
2016-01-01
Science education researchers have turned their attention to the use of images in textbooks, both because pages are heavily illustrated and because visual literacy is an important aptitude for science students. Text-image integration in the textbook is described here as composition schemes in increasing degrees of integration: prose primary (PP),…
Fractional order implementation of Integral Resonant Control - A nanopositioning application.
San-Millan, Andres; Feliu-Batlle, Vicente; Aphale, Sumeet S
2017-10-04
By exploiting the co-located sensor-actuator arrangement in typical flexure-based piezoelectric stack actuated nanopositioners, the pole-zero interlacing exhibited by their axial frequency response can be transformed to a zero-pole interlacing by adding a constant feed-through term. Integral Resonant Control (IRC) utilizes this unique property to add substantial damping to the dominant resonant mode by means of a simple integrator implemented in closed loop. IRC used in conjunction with an integral tracking scheme effectively reduces positioning errors introduced by modelling inaccuracies or parameter uncertainties. Over the past few years, successful application of the IRC technique to nanopositioning systems has demonstrated performance robustness, easy tunability and versatility. The main drawback has been the relatively small positioning bandwidth achievable. This paper proposes a fractional-order implementation of the classical integral tracking scheme employed in tandem with the IRC scheme to deliver damping and tracking. The fractional-order integrator introduces an additional design parameter which allows desired pole placement, resulting in superior closed-loop bandwidth. Simulations and experimental results are presented to validate the theory. A 250% improvement in the achievable positioning bandwidth is observed with the proposed fractional-order scheme. Copyright © 2017. Published by Elsevier Ltd.
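A fractional-order integral can be approximated numerically; the sketch below uses the standard Grünwald-Letnikov discretization (the paper's controller realization may use a different approximation, e.g. recursive filters), checked against the known fractional integral of a constant.

```python
import math

def gl_fractional_integral(f_samples, h, alpha):
    """Grunwald-Letnikov approximation of the fractional integral of
    order alpha: I^alpha f(t) ~ h^alpha * sum_k w_k f(t - k h),
    with weights from the binomial series of (1 - z)^(-alpha)."""
    n = len(f_samples)
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (1.0 - alpha) / k))   # recursive GL weights
    acc = sum(w[k] * f_samples[n - 1 - k] for k in range(n))
    return h**alpha * acc

h, alpha = 1e-3, 0.5
samples = [1.0] * (int(1.0 / h) + 1)        # f(t) = 1 on [0, 1]
approx = gl_fractional_integral(samples, h, alpha)
exact = 1.0 / math.gamma(alpha + 1.0)       # I^0.5 of 1 at t=1 is 1/Gamma(1.5)
```

With alpha = 1 the weights all become 1 and the formula degenerates to the ordinary rectangle-rule integral, which is a quick way to sanity-check the recursion.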
NASA Technical Reports Server (NTRS)
Chan, William M.
1992-01-01
The following papers are presented: (1) numerical methods for the simulation of complex multi-body flows with applications for the Integrated Space Shuttle vehicle; (2) a generalized scheme for 3-D hyperbolic grid generation; (3) collar grids for intersecting geometric components within the Chimera overlapped grid scheme; and (4) application of the Chimera overlapped grid scheme to simulation of Space Shuttle ascent flows.
Variational Bayesian identification and prediction of stochastic nonlinear dynamic causal models.
Daunizeau, J; Friston, K J; Kiebel, S J
2009-11-01
In this paper, we describe a general variational Bayesian approach for approximate inference on nonlinear stochastic dynamic models. This scheme extends established approximate inference on hidden-states to cover: (i) nonlinear evolution and observation functions, (ii) unknown parameters and (precision) hyperparameters and (iii) model comparison and prediction under uncertainty. Model identification or inversion entails the estimation of the marginal likelihood or evidence of a model. This difficult integration problem can be finessed by optimising a free-energy bound on the evidence using results from variational calculus. This yields a deterministic update scheme that optimises an approximation to the posterior density on the unknown model variables. We derive such a variational Bayesian scheme in the context of nonlinear stochastic dynamic hierarchical models, for both model identification and time-series prediction. The computational complexity of the scheme is comparable to that of an extended Kalman filter, which is critical when inverting high dimensional models or long time-series. Using Monte-Carlo simulations, we assess the estimation efficiency of this variational Bayesian approach using three stochastic variants of chaotic dynamic systems. We also demonstrate the model comparison capabilities of the method, its self-consistency and its predictive power.
Singh, Ravendra; Ierapetritou, Marianthi; Ramachandran, Rohit
2013-11-01
The next generation of QbD-based pharmaceutical products will be manufactured through continuous processing. This will allow the integration of online/inline monitoring tools, coupled with an efficient advanced model-based feedback control system, to achieve precise control of process variables, so that the predefined product quality can be achieved consistently. The direct compaction process considered in this study is highly interactive and involves time delays for a number of process variables due to sensor placements, process equipment dimensions, and the flow characteristics of the solid material. A simple feedback regulatory control system (e.g., PI(D)) by itself may not be sufficient to achieve the tight process control that is mandated by regulatory authorities. The process presented herein comprises coupled dynamics involving slow and fast responses, indicating the need for a hybrid control scheme such as a combined MPC-PID scheme. In this manuscript, an efficient system-wide hybrid control strategy for an integrated continuous pharmaceutical tablet manufacturing process via direct compaction has been designed. The designed control system is a hybrid MPC-PID scheme. An effective controller parameter tuning strategy involving an ITAE method coupled with an optimization strategy has been used for tuning both MPC and PID parameters. The designed hybrid control system has been implemented in a first-principles model-based flowsheet that was simulated in gPROMS (Process Systems Enterprise). Results demonstrate enhanced performance of critical quality attributes (CQAs) under the hybrid control scheme compared to PID-only or MPC-only control schemes, illustrating the potential of a hybrid control scheme in improving pharmaceutical manufacturing operations. Copyright © 2013 Elsevier B.V. All rights reserved.
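As a toy illustration of the PID half of such a hybrid scheme, a discrete PID loop on a first-order process can be simulated as follows; the plant and gains are invented for the example and are unrelated to the tablet manufacturing flowsheet or its ITAE-based tuning.

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.05, steps=400):
    """Discrete PID loop closed around a toy first-order process
    y' = (-y + u)/tau, stepped with explicit Euler. Illustrative only."""
    tau = 1.0
    y, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt                  # integral term accumulator
        deriv = (err - prev_err) / dt      # backward-difference derivative
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y += dt * (-y + u) / tau           # plant update
    return y

y_final = simulate_pid(kp=2.0, ki=1.0, kd=0.1)
```

The integral term is what removes the steady-state offset here; with ki = 0 the loop would settle below the setpoint.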
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Fang, E-mail: fliu@lsec.cc.ac.cn; Lin, Lin, E-mail: linlin@math.berkeley.edu; Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720
We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self energy. Our scheme takes the zero broadening limit in the Green's function first, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self energy expressions to perform the numerical convolution at different frequencies.
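The analytic principal value integration on a subinterval can be illustrated for a linear numerator piece; higher-degree polynomial pieces work the same way after polynomial division. The function below is a simplified sketch, not the paper's implementation.

```python
import math

def pv_linear_over_pole(p, q, c, a, b):
    """Principal value of integral_a^b (p*x + q)/(x - c) dx with a < c < b,
    computed analytically by writing the integrand as p + (p*c + q)/(x - c):
    PV = p*(b - a) + (p*c + q) * ln((b - c)/(c - a))."""
    assert a < c < b
    return p * (b - a) + (p * c + q) * math.log((b - c) / (c - a))

# PV of 1/(x - 1) over [0, 3]: the singular parts on [0, 2] cancel,
# leaving ln(2) from the remaining interval.
val = pv_linear_over_pole(p=0.0, q=1.0, c=1.0, a=0.0, b=3.0)
```

Because each subinterval contribution is exact, the overall quadrature error comes only from the piecewise polynomial fit of the numerator, which is the point of the scheme.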
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjogreen, B.; Sandham, N. D.; Hadjadj, A.; Kwak, Dochan (Technical Monitor)
2000-01-01
In a series of papers, Olsson (1994, 1995), Olsson & Oliger (1994), Strand (1994), Gerritsen & Olsson (1996), Yee et al. (1999a,b, 2000) and Sandham & Yee (2000), the issue of nonlinear stability of the compressible Euler and Navier-Stokes equations, including physical boundaries, and the corresponding development of the discrete analogue of nonlinearly stable high order schemes, including boundary schemes, were developed, extended and evaluated for various fluid flows. High order here refers to spatial schemes that are essentially fourth-order or higher away from shock and shear regions. The objective of this paper is to give an overview of the progress of the low dissipative high order shock-capturing schemes proposed by Yee et al. (1999a,b, 2000). This class of schemes consists of simple non-dissipative high order compact or non-compact central spatial differencings and adaptive nonlinear numerical dissipation operators to minimize the use of numerical dissipation. The amount of numerical dissipation is further minimized by applying the scheme to the entropy splitting form of the inviscid flux derivatives, and by rewriting the viscous terms to minimize odd-even decoupling before the application of the central scheme (Sandham & Yee). The efficiency and accuracy of these schemes are compared with spectral, TVD and fifth-order WENO schemes. A new approach of Sjogreen & Yee (2000) utilizing non-orthogonal multi-resolution wavelet basis functions as sensors to dynamically determine the appropriate amount of numerical dissipation to be added to the non-dissipative high order spatial scheme at each grid point will be discussed.
Numerical experiments of long time integration of smooth flows, shock-turbulence interactions, direct numerical simulations of a 3-D compressible turbulent plane channel flow, and various mixing layer problems indicate that these schemes are especially suitable for practical complex problems in nonlinear aeroacoustics, rotorcraft dynamics, direct numerical simulation or large eddy simulation of compressible turbulent flows at various speeds including high-speed shock-turbulence interactions, and general long time wave propagation problems. These schemes, including entropy splitting, have also been extended to freestream preserving schemes on curvilinear moving grids for a thermally perfect gas (Vinokur & Yee 2000).
Studies in integrated line-and packet-switched computer communication systems
NASA Astrophysics Data System (ADS)
Maglaris, B. S.
1980-06-01
The problem of efficiently allocating the bandwidth of a trunk to both types of traffic is handled for various system and traffic models. A performance analysis is carried out both for variable and fixed frame schemes. It is shown that variable frame schemes, adjusting the frame length according to the traffic variations, offer better trunk utilization at the cost of the additional hardware and software complexity needed because of the lack of synchronization. An optimization study on the fixed frame schemes follows. The problem of dynamically allocating the fixed frame to both types of traffic is formulated as a Markovian Decision process. It is shown that the movable boundary scheme, suggested for commercial implementations of integrated multiplexors, offers optimal or near optimal performance and simplicity of implementation. Finally, the behavior of the movable boundary integrated scheme is studied for tandem link connections. Under the assumptions made for the line-switched traffic, the forward allocation technique is found to offer the best alternative among different path set-up strategies.
NASA Astrophysics Data System (ADS)
Pandey, Saurabh; Majhi, Somanath; Ghorai, Prasenjit
2017-07-01
In this paper, the conventional relay feedback test has been modified for modelling and identification of a class of real-time dynamical systems in terms of linear transfer function models with time delay. An ideal relay and the unknown system are connected through a negative feedback loop to produce a sustained oscillatory output around a non-zero setpoint. Thereafter, the obtained limit cycle information is substituted into the derived mathematical equations for accurate identification of unknown plants in terms of overdamped, underdamped and critically damped second-order plus dead time, and stable first-order plus dead time transfer function models. Typical examples from the literature are included for validation of the proposed identification scheme through computer simulations. Subsequently, comparisons between the estimated model and the true system are drawn through the integral absolute error criterion and frequency response plots. Finally, the output responses obtained through simulations are verified experimentally on a real-time liquid level control system using a Yokogawa CENTUM CS3000 Distributed Control System setup.
Finite-difference model for 3-D flow in bays and estuaries
Smith, Peter E.; Larock, Bruce E.
1993-01-01
This paper describes a semi-implicit finite-difference model for the numerical solution of three-dimensional flow in bays and estuaries. The model treats the gravity wave and vertical diffusion terms in the governing equations implicitly, and other terms explicitly. The model achieves essentially second-order accurate and stable solutions in strongly nonlinear problems by using a three-time-level leapfrog-trapezoidal scheme for the time integration.
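The three-time-level leapfrog-trapezoidal idea can be sketched on a scalar ODE: a leapfrog predictor followed by a trapezoidal corrector. The model's actual semi-implicit treatment of gravity wave and vertical diffusion terms is not reproduced here.

```python
import math

def leapfrog_trapezoidal(f, u0, dt, nsteps):
    """Leapfrog predictor + trapezoidal corrector for u' = f(u),
    as a scalar sketch of the three-time-level scheme named above.
    Startup uses one explicit Euler step to build the second level."""
    u_prev = u0
    u = u0 + dt * f(u0)                            # Euler start
    for _ in range(nsteps - 1):
        u_star = u_prev + 2 * dt * f(u)            # leapfrog predictor
        u_new = u + 0.5 * dt * (f(u) + f(u_star))  # trapezoidal corrector
        u_prev, u = u, u_new
    return u

# Decay test problem u' = -u, exact solution e^{-t}
u_end = leapfrog_trapezoidal(lambda u: -u, u0=1.0, dt=0.01, nsteps=100)
```

The trapezoidal corrector damps the computational mode that makes pure leapfrog prone to time-splitting oscillations, while keeping second-order accuracy.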
Song, Jong-Won; Hirao, Kimihiko
2015-10-14
Since the advent of the hybrid functional in 1993, it has become a main quantum chemical tool for the calculation of energies and properties of molecular systems. Following the introduction of the long-range corrected hybrid scheme for density functional theory a decade later, the applicability of the hybrid functional has been further amplified due to the resulting increased performance on orbital energies, excitation energies, non-linear optical properties, barrier heights, and so on. Nevertheless, the high cost associated with the evaluation of Hartree-Fock (HF) exchange integrals remains a bottleneck for the broader and more active application of hybrid functionals to large molecular and periodic systems. Here, we propose a very simple yet efficient method for the computation of the long-range corrected hybrid scheme. It uses a modified two-Gaussian attenuating operator instead of the error function for the long-range HF exchange integral. As a result, the two-Gaussian HF operator, which mimics the shape of the error function operator, reduces computational time dramatically (e.g., about 14 times acceleration in a C diamond calculation using periodic boundary conditions) and enables lower scaling with system size, while maintaining the improved features of long-range corrected density functional theory.
Comparison of multiple atmospheric chemistry schemes in C-IFS
NASA Astrophysics Data System (ADS)
Flemming, Johannes; Huijnen, Vincent; Arteta, Joaquim; Stein, Olaf; Inness, Antje; Josse, Beatrice; Schultz, Martin; Peuch, Vincent-Henri
2013-04-01
As part of the MACC-II project (EU-FP7), ECMWF's integrated forecast system (IFS) is being extended by modules for chemistry, deposition and emission of reactive gases. This integration of the chemistry complements the integration of aerosol processes in IFS (Composition-IFS). C-IFS provides global forecasts and analyses of atmospheric composition. Its main motivation is to utilize the IFS for the assimilation of satellite observations of atmospheric composition. Furthermore, the integration of chemistry packages directly into IFS will achieve better consistency in the treatment of physical processes and has the potential for simulating interactions between atmospheric composition and meteorology. Atmospheric chemistry in C-IFS can be represented by the modified CB05 scheme as implemented in the TM5 model and the RACMOBUS scheme as implemented in the MOCAGE model. An implementation of the scheme of the MOZART 3.5 model is ongoing. We will present the latest progress in the development and application of C-IFS. We will focus on the comparison of the different chemistry schemes in an otherwise identical C-IFS model setup (emissions, meteorology) as well as in their original chemistry and transport model setups.
NASA Astrophysics Data System (ADS)
Gaudreau, Louis; Bogan, Alex; Korkusinski, Marek; Studenikin, Sergei; Austing, D. Guy; Sachrajda, Andrew S.
2017-09-01
Long distance entanglement distribution is an important problem for quantum information technologies to solve. Current optical schemes are known to have fundamental limitations. A coherent photon-to-spin interface built with quantum dots (QDs) in a direct bandgap semiconductor can provide a solution for efficient entanglement distribution. QD circuits offer integrated spin processing for full Bell state measurement (BSM) analysis and spin quantum memory. Crucially the photo-generated spins can be heralded by non-destructive charge detection techniques. We review current schemes to transfer a polarization-encoded state or a time-bin-encoded state of a photon to the state of a spin in a QD. The spin may be that of an electron or that of a hole. We describe adaptations of the original schemes to employ heavy holes which have a number of attractive properties including a g-factor that is tunable to zero for QDs in an appropriately oriented external magnetic field. We also introduce simple throughput scaling models to demonstrate the potential performance advantage of full BSM capability in a QD scheme, even when the quantum memory is imperfect, over optical schemes relying on linear optical elements and ensemble quantum memories.
Study on the security of the authentication scheme with key recycling in QKD
NASA Astrophysics Data System (ADS)
Li, Qiong; Zhao, Qiang; Le, Dan; Niu, Xiamu
2016-09-01
In quantum key distribution (QKD), information-theoretically secure authentication is necessary to guarantee the integrity and authenticity of the information exchanged over the classical channel. In order to reduce the key consumption, the authentication scheme with key recycling (KR), in which a secret but fixed hash function is used for multiple messages while each tag is encrypted with a one-time pad (OTP), is preferred in QKD. Based on the assumption that the OTP key is perfect, the security of the authentication scheme has been proved. However, the OTP key used for authentication in a practical QKD system is not perfect. How the imperfect OTP affects the security of the authentication scheme with KR is analyzed thoroughly in this paper. In a practical QKD system, the information of the OTP key resulting from QKD is partially leaked to the adversary. Although the information leakage is usually small enough to be neglected, it leads to increasingly degraded security of the authentication scheme as the system runs continuously. Both our theoretical analysis and simulation results demonstrate that the security level of the authentication scheme with KR, mainly indicated by its substitution probability, degrades exponentially in the number of rounds and gradually diminishes to zero.
NASA Astrophysics Data System (ADS)
Herman, M. W.; Furlong, K. P.; Hayes, G. P.; Benz, H.
2014-12-01
Strong motion accelerometers can record large amplitude shaking on-scale in the near-field of large earthquake ruptures; however, numerical integration of such records to determine displacement is typically unstable due to baseline changes (i.e., distortions in the zero value) that occur during strong shaking. We use datasets from the 2011 Mw 9.0 Tohoku earthquake to assess whether a relatively simple empirical correction scheme (Boore et al., 2002) can return accurate displacement waveforms useful for constraining details of the fault slip. The coseismic deformation resulting from the Tohoku earthquake was recorded by the Kiban Kyoshin network (KiK-net) of strong motion instruments as well as by a dense network of high-rate (1 Hz) GPS instruments. After baseline correcting the KiK-net records and integrating to displacement, over 85% of the KiK-net borehole instrument waveforms and over 75% of the KiK-net surface instrument waveforms match collocated 1 Hz GPS displacement time series. Most of the records that do not match the GPS-derived displacements following the baseline correction have large, systematic drifts that can be automatically identified by examining the slopes in the first 5-10 seconds of the velocity time series. We apply the same scheme to strong motion records from the 2014 Mw 8.2 Iquique earthquake. Close correspondence in both direction and amplitude between coseismic static offsets derived from the integrated strong motion time series and those predicted from a teleseismically-derived finite fault model, as well as displacement amplitudes consistent with InSAR-derived results, suggest that the correction scheme works successfully for the Iquique event. In the absence of GPS displacements, these strong motion-derived offsets provide constraints on the overall distribution of slip on the fault. 
In addition, the coseismic strong motion-derived displacement time series (50-100 s long) contain a near-field record of the temporal evolution of the rupture, supplementing teleseismic data and improving resolution of the location and timing of moment release in finite fault models.
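A simplified version of such an empirical baseline correction can be sketched as follows; the exact Boore et al. (2002) procedure involves more careful selection of the correction segments, so the steps below are only illustrative.

```python
import numpy as np

def baseline_correct_displacement(acc, dt, pre_n, tail_n):
    """Simplified empirical baseline correction, illustrative only:
    1) remove the pre-event mean from the acceleration,
    2) integrate to velocity with the trapezoidal rule,
    3) subtract a straight line fitted to the late-time velocity,
    4) integrate again to displacement."""
    acc = acc - acc[:pre_n].mean()
    t = np.arange(len(acc)) * dt
    vel = np.concatenate(([0.0], np.cumsum(0.5 * (acc[1:] + acc[:-1]) * dt)))
    slope, intercept = np.polyfit(t[-tail_n:], vel[-tail_n:], 1)
    vel = vel - (slope * t + intercept)
    disp = np.concatenate(([0.0], np.cumsum(0.5 * (vel[1:] + vel[:-1]) * dt)))
    return disp

# Synthetic record: no true motion, only a step baseline offset after onset
dt = 0.01
acc = np.zeros(1000)
acc[200:] += 0.01          # instrument baseline shift, not real shaking
disp = baseline_correct_displacement(acc, dt, pre_n=200, tail_n=300)
```

Without the correction, this pure baseline offset would integrate into a large spurious quadratic drift in displacement; the corrected trace stays close to zero.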
Development Of A Navier-Stokes Computer Code
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan; Kwak, Dochan
1993-01-01
Report discusses aspects of development of CENS3D computer code, solving three-dimensional Navier-Stokes equations of compressible, viscous, unsteady flow. Implements implicit finite-difference or finite-volume numerical-integration scheme, called "lower-upper symmetric-Gauss-Seidel" (LU-SGS), offering potential for very low computer time per iteration and for fast convergence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohsuga, Ken; Takahashi, Hiroyuki R.
2016-02-20
We develop a numerical scheme for solving the equations of fully special relativistic radiation magnetohydrodynamics (MHD), in which the frequency-integrated, time-dependent radiation transfer equation is solved to calculate the specific intensity. The radiation energy density, the radiation flux, and the radiation stress tensor are obtained by angular quadrature of the intensity. In the present method, conservation of the total mass, momentum, and energy of the radiation magnetofluids is guaranteed. We treat not only isotropic scattering but also Thomson scattering. The numerical method for the MHD part is the same as that of our previous work. The advection terms are explicitly solved, and the source terms, which describe the gas-radiation interaction, are implicitly integrated. Our code is suitable for massively parallel computing. We show that our code gives reasonable results in several numerical tests of propagating radiation and radiation hydrodynamics. In particular, the correct solution is obtained even in optically very thin or moderately thin regimes, and the special relativistic effects are nicely reproduced.
NASA Astrophysics Data System (ADS)
Usvyat, Denis; Maschio, Lorenzo; Manby, Frederick R.; Casassa, Silvia; Schütz, Martin; Pisani, Cesare
2007-08-01
A density fitting scheme for calculating electron repulsion integrals used in local second order Møller-Plesset perturbation theory for periodic systems (DFP) is presented. Reciprocal space techniques are systematically adopted, for which the use of Poisson fitting functions turned out to be instrumental. The role of the various parameters (truncation thresholds, density of the k net, Coulomb versus overlap metric, etc.) on computational times and accuracy is explored, using as test cases primitive-cell- and conventional-cell-diamond, proton-ordered ice, crystalline carbon dioxide, and a three-layer slab of magnesium oxide. Timings and results obtained when the electron repulsion integrals are calculated without invoking the DFP approximation, are taken as the reference. It is shown that our DFP scheme is both accurate and very efficient once properly calibrated. The lattice constant and cohesion energy of the CO2 crystal are computed to illustrate the capabilities of providing a physically correct description also for weakly bound crystals, in strong contrast to present density functional approaches.
NASA Astrophysics Data System (ADS)
Rahman, Syazila; Yusoff, Mohd. Zamri; Hasini, Hasril
2012-06-01
This paper describes a comparison between the cell-centered scheme and the cell-vertex scheme in the calculation of high speed compressible flow properties. The calculation is carried out using Computational Fluid Dynamics (CFD), in which the mass, momentum and energy equations are solved simultaneously over the flow domain. The geometry under investigation consists of a Binnie and Green convergent-divergent nozzle, and a structured mesh is implemented throughout the flow domain. The finite volume CFD solver employs a second-order accurate central differencing scheme for spatial discretization. In addition, the second-order accurate cell-vertex finite volume spatial discretization is also introduced for comparison. Multi-stage Runge-Kutta time integration is implemented for solving the set of non-linear governing equations, with variables stored at the vertices. Artificial dissipation uses second- and fourth-order terms with a pressure switch to detect changes in the pressure gradient. This is important for controlling solution stability and capturing shock discontinuities. The results are compared with experimental measurements, and good agreement is obtained for both cases.
NASA Technical Reports Server (NTRS)
Kwon, Dong-Soo
1991-01-01
All research results about flexible manipulator control were integrated to show a control scenario of a bracing manipulator. First, dynamic analysis of a flexible manipulator was done for modeling. Second, from the dynamic model, the inverse dynamic equation was derived, and the time domain inverse dynamic method was proposed for the calculation of the feedforward torque and the desired flexible coordinate trajectories. Third, a tracking controller was designed by combining the inverse dynamic feedforward control with the joint feedback control. The control scheme was applied to the tip position control of a single link flexible manipulator for zero and non-zero initial condition cases. Finally, the contact control scheme was added to the position tracking control. A control scenario of a bracing manipulator is provided and evaluated through simulation and experiment on a single link flexible manipulator.
An integrated control scheme for space robot after capturing non-cooperative target
NASA Astrophysics Data System (ADS)
Wang, Mingming; Luo, Jianjun; Yuan, Jianping; Walter, Ulrich
2018-06-01
How to identify the mass properties and eliminate the unknown angular momentum of a space robotic system after capturing a non-cooperative target is a great challenge. This paper focuses on designing an integrated control framework that includes a detumbling strategy, coordination control, and parameter identification. Firstly, inverted and forward chain approaches are synthesized for the space robot to obtain the dynamic equation in operational space. Secondly, a detumbling strategy is introduced using elementary functions with normalized time, while the imposed end-effector constraints are considered. Next, a coordination control scheme for stabilizing both the base and the end-effector, based on impedance control, is implemented in the presence of the target's parameter uncertainty. With measurements of the forces and torques exerted on the target, its mass properties are estimated during the detumbling process. Simulation results are presented using a 7 degree-of-freedom kinematically redundant space manipulator, which verify the performance and effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
McDonough, Kevin K.
The dissertation presents contributions to fuel-efficient control of vehicle speed and constrained control with applications to aircraft. In the first part of the dissertation, a stochastic approach to fuel-efficient vehicle speed control is developed. This approach encompasses stochastic modeling of road grade and traffic speed, modeling of fuel consumption through the use of a neural network, and the application of stochastic dynamic programming to generate vehicle speed control policies that are optimized for the trade-off between fuel consumption and travel time. The fuel economy improvements with the proposed policies are quantified through simulations and vehicle experiments. It is shown that the policies lead to the emergence of time-varying vehicle speed patterns that are referred to as time-varying cruise. Through simulations and experiments it is confirmed that these time-varying vehicle speed profiles are more fuel-efficient than driving at a comparable constant speed. Motivated by these results, a simpler strategy that is more appealing for practical implementation is also developed. This strategy relies on a finite state machine and state transition threshold optimization, and its benefits are quantified through model-based simulations and vehicle experiments. Several additional contributions are made to approaches for stochastic modeling of road grade and vehicle speed, including the use of the Kullback-Leibler divergence and divergence rate and a stochastic jump-like model for the behavior of the road grade. In the second part of the dissertation, contributions to constrained control with applications to aircraft are described. Recoverable sets and integral safe sets of initial states of constrained closed-loop systems are introduced first, and computational procedures for such sets based on linear discrete-time models are given. The use of linear discrete-time models is emphasized as they lead to fast computational procedures.
Examples of these sets for aircraft longitudinal and lateral dynamics are reported, and it is shown that these sets can be larger than the more commonly used safe sets. An approach to constrained maneuver planning based on chaining recoverable sets or integral safe sets is described and illustrated with a simulation example. To facilitate the application of this maneuver planning approach in aircraft loss of control (LOC) situations, when the model is only identified at the current trim condition but these sets need to be predicted at other flight conditions, the dependence trends of the safe and recoverable sets on aircraft flight conditions are characterized. A scaling procedure to estimate subsets of safe and recoverable sets at one trim condition based on their knowledge at another trim condition is defined. Finally, two control schemes that exploit integral safe sets are proposed. The first scheme, referred to as the controller state governor (CSG), resets the controller state (typically an integrator) to enforce the constraints and enlarge the set of plant states that can be recovered without constraint violation. The second scheme, referred to as the controller state and reference governor (CSRG), combines the controller state governor with the reference governor control architecture and provides the capability of simultaneously modifying the reference command and the controller state to enforce the constraints. Theoretical results that characterize the response properties of both schemes are presented. Examples are reported that illustrate the operation of these schemes on aircraft flight dynamics models and gas turbine engine dynamic models.
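The controller state governor idea, resetting the controller state to enforce constraints, can be illustrated with a deliberately simplified sketch. Here a scalar PI-type command is checked against an actuator limit and, if infeasible, the integrator state is reset to the nearest feasible value; the gains, the scalar plant, and the reset rule are hypothetical simplifications of the scheme in the abstract, which operates on integral safe sets:

```python
def csg_step(x, xi, kp, ki, u_max):
    """Toy controller state governor (CSG): compute the PI command
    u = -kp*x - ki*xi; if it violates the symmetric actuator limit
    |u| <= u_max, reset the integrator state xi to the nearest value
    that makes the command feasible. Assumes ki != 0."""
    u = -kp * x - ki * xi
    if abs(u) > u_max:
        sign = 1.0 if u > 0 else -1.0
        # solve -kp*x - ki*xi = sign*u_max for the reset value of xi
        xi = -(sign * u_max + kp * x) / ki
        u = -kp * x - ki * xi
    return u, xi

# Nominal command -7 exceeds the limit of 3; the reset makes it feasible
u, xi = csg_step(2.0, 5.0, kp=1.0, ki=1.0, u_max=3.0)
```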
A comparison of two multi-variable integrator windup protection schemes
NASA Technical Reports Server (NTRS)
Mattern, Duane
1993-01-01
Two methods are examined for limit and integrator windup protection for multi-input, multi-output linear controllers subject to actuator constraints. The methods begin with an existing linear controller that satisfies the specifications for the nominal, small-perturbation, linear model of the plant. The controllers are formulated to include an additional contribution to the state derivative calculations. The first method examined is the multi-variable version of the single-input, single-output, high-gain Conventional Anti-Windup (CAW) scheme. Except for the actuator limits, the CAW scheme is linear. The second scheme examined, denoted the Modified Anti-Windup (MAW) scheme, uses a scalar to modify the magnitude of the controller output vector while maintaining the vector direction. The calculation of the scalar modifier is a nonlinear function of the controller outputs and the actuator limits. In both cases the constrained actuator is tracked. These two integrator windup protection (IWP) methods are demonstrated on a turbofan engine control system with five measurements, four control variables, and four actuators. The closed-loop responses of the two schemes are compared and contrasted during limit operation. The issue of maintaining the direction of the controller output vector using the Modified Anti-Windup scheme is discussed, and the advantages and disadvantages of both IWP methods are presented.
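The direction-preserving scaling at the heart of the MAW scheme can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formulation; it assumes each actuator's limits straddle zero so that scaling the vector toward zero is always feasible:

```python
import numpy as np

def maw_limit(u, u_min, u_max):
    """MAW-style limiting: a single scalar in (0, 1] scales the whole
    controller output vector so that every component respects its
    actuator limit while the vector's direction is preserved."""
    u = np.asarray(u, dtype=float)
    scale = 1.0
    for ui, lo, hi in zip(u, u_min, u_max):
        if ui > hi:
            scale = min(scale, hi / ui)
        elif ui < lo:
            scale = min(scale, lo / ui)
    return scale * u, scale

# Second component violates its lower limit of -3, so the whole
# vector is scaled by 0.75, keeping its direction
u_lim, s = maw_limit([2.0, -4.0], u_min=[-3.0, -3.0], u_max=[3.0, 3.0])
```

Compared with clipping each component independently, this keeps the commanded direction, which is the property the paper examines during limit operation.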
NASA Technical Reports Server (NTRS)
Saleeb, Atef F.; Li, Wei
1995-01-01
This two-part report is concerned with the development of a general framework for the implicit time-stepping integrators for the flow and evolution equations in generalized viscoplastic models. The primary goal is to present a complete theoretical formulation, to address in detail the algorithmic and numerical analysis aspects involved in its finite element implementation, and to critically assess the numerical performance of the developed schemes in a comprehensive set of test cases. On the theoretical side, the general framework is developed on the basis of the unconditionally stable, backward-Euler difference scheme as a starting point. Its mathematical structure is of sufficient generality to allow a unified treatment of different classes of viscoplastic models with internal variables. In particular, two specific models of this type, which are representative of the present state of the art in metal viscoplasticity, are considered in the applications reported here; i.e., fully associative (GVIPS) and non-associative (NAV) models. The matrix forms developed for both these models are directly applicable to both initially isotropic and anisotropic materials, in general (three-dimensional) situations as well as subspace applications (i.e., plane stress/strain, axisymmetric, and generalized plane stress in shells). On the computational side, issues related to efficiency and robustness are emphasized in developing the (local) iterative algorithm. In particular, closed-form expressions for residual vectors and (consistent) material tangent stiffness arrays are given explicitly for both GVIPS and NAV models, with their maximum sizes 'optimized' to depend only on the number of independent stress components (but independent of the number of viscoplastic internal state parameters). Significant robustness of the local iterative solution is provided by complementing the basic Newton-Raphson scheme with a line-search strategy for convergence.
In the present first part of the report, we focus on the theoretical developments, and discussions of the results of numerical-performance studies using the integration schemes for GVIPS and NAV models.
Real-time, interactive animation of deformable two- and three-dimensional objects
Desbrun, Mathieu; Schroeder, Peter; Meyer, Mark; Barr, Alan H.
2003-06-03
A method of updating in real-time the locations and velocities of mass points of a two- or three-dimensional object represented by a mass-spring system. A modified implicit Euler integration scheme is employed to determine the updated locations and velocities. In an optional post-integration step, the updated locations are corrected to preserve angular momentum. A processor readable medium and a network server each tangibly embodying the method are also provided. A system comprising a processor in combination with the medium, and a system comprising the server in combination with a client for accessing the server over a computer network, are also provided.
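For a single mass on a linear spring the implicit (backward) Euler update can be solved in closed form, which conveys why such schemes remain stable at large time steps. This scalar sketch is illustrative only; the patent's method is the multi-dimensional, modified variant with a post-integration angular momentum correction:

```python
def implicit_euler_spring(x, v, k, m, dt, x_rest=0.0):
    """Implicit (backward) Euler update for one mass on a linear spring,
    solved exactly for the linear implicit system:
        v+ = (v - dt*(k/m)*(x - x_rest)) / (1 + dt^2*k/m)
        x+ = x + dt*v+
    Stable for any dt, at the price of numerical damping."""
    w = dt * dt * k / m
    v_new = (v - dt * (k / m) * (x - x_rest)) / (1.0 + w)
    x_new = x + dt * v_new
    return x_new, v_new

# 100 steps of a unit oscillator: the motion stays bounded and the
# energy 0.5*v^2 + 0.5*x^2 decays from its initial value of 0.5
x, v = 1.0, 0.0
for _ in range(100):
    x, v = implicit_euler_spring(x, v, k=1.0, m=1.0, dt=0.1)
energy = 0.5 * v * v + 0.5 * x * x
```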
Seismic waves in heterogeneous material: subcell resolution of the discontinuous Galerkin method
NASA Astrophysics Data System (ADS)
Castro, Cristóbal E.; Käser, Martin; Brietzke, Gilbert B.
2010-07-01
We present an important extension of the arbitrary high-order discontinuous Galerkin (DG) finite-element method to model 2-D elastic wave propagation in highly heterogeneous material. In this new approach we include space-variable coefficients to describe smooth or discontinuous material variations inside each element, using the same numerical approximation strategy as for the velocity-stress variables in the formulation of the elastic wave equation. The combination of the DG method with a time integration scheme based on the solution of arbitrary-accuracy derivative Riemann problems still provides an explicit, one-step scheme which achieves arbitrary high-order accuracy in space and time. Compared to previous formulations the new scheme contains two additional terms in the form of volume integrals. We show that the increased computational cost per element can be overcompensated: the improved material representation inside each element allows coarser meshes, which reduces the total number of elements and therefore the computational time needed to reach a desired error level. We confirm the accuracy of the proposed scheme by performing convergence tests and several numerical experiments considering smooth and highly heterogeneous material. As the approximation of the velocity and stress variables in the wave equation and of the material properties in the model can be chosen independently, we investigate the influence of the polynomial material representation on the accuracy of the synthetic seismograms with respect to computational cost. Moreover, we study the behaviour of the new method at strong material discontinuities, in the case where the mesh is not aligned with the material interface. In this case a second-order linear material approximation seems to be the best choice, with higher-order intra-cell approximation leading to potentially unstable behaviour.
For all test cases we validate our solution against the well-established standard fourth-order finite difference and spectral element method.
Real-time 3D video compression for tele-immersive environments
NASA Astrophysics Data System (ADS)
Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William
2006-01-01
Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore a different part of the 3D compression design space, considering the factors of complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
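The run-length encoding stage used for the depth information in the second scheme can be sketched directly; the Huffman coding that follows it, and the zlib and color-reduction stages of the first scheme, are omitted here:

```python
def rle_encode(depth):
    """Run-length encode a sequence of depth values as (value, count)
    pairs. Depth maps contain long runs of identical values, which is
    what makes this simple stage effective before entropy coding."""
    if not depth:
        return []
    runs, val, count = [], depth[0], 1
    for d in depth[1:]:
        if d == val:
            count += 1
        else:
            runs.append((val, count))
            val, count = d, 1
    runs.append((val, count))
    return runs

def rle_decode(runs):
    """Inverse of rle_encode: expand (value, count) pairs."""
    return [v for v, c in runs for _ in range(c)]

depth_row = [7, 7, 7, 0, 0, 5]
runs = rle_encode(depth_row)   # [(7, 3), (0, 2), (5, 1)]
```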
NASA Astrophysics Data System (ADS)
Mesinger, Fedor; Popovic, Jelena
2010-09-01
Ever since its introduction to meteorology in the early 1970s, the forward-backward scheme has proven to be a very efficient method of treating gravity waves, with the added bonus of avoiding the time computational mode of the leapfrog scheme. It has been, and still is, used in a number of models. When used on a square grid other than the Arakawa C grid, modifications are available to suppress the noise-generating separation of solutions on elementary C grids. Yet, in spite of a number of papers addressing the scheme and its modifications, issues remain that have either not been addressed or have been commented upon in a misleading or even incorrect way. Specifically, restricting ourselves to the B/E grid: does it matter, and if so how, which of the two equations, momentum or continuity, is integrated forward? Is there just one modification suppressing the separation of solutions, or have two modification schemes been proposed? These questions are addressed, and a number of misleading statements are recalled and commented upon. In particular, it is demonstrated that there is no added computational cost in integrating the momentum equation forward, and it is pointed out that this would seem advantageous given the height perturbations excited in the first step following a perturbation at a single height point. Yet, 48-h numerical experiments with a full-physics model show only a barely visible difference between forecasts done integrating one or the other equation forward.
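A forward-backward step for the 1-D linearized gravity wave (shallow water) equations can be sketched as below, integrating the continuity equation forward and the momentum equation backward; per the abstract, the opposite order costs nothing extra. The grid size, depth, and initial bump are illustrative assumptions:

```python
import numpy as np

def forward_backward_step(h, u, g, H, dt, dx):
    """One forward-backward step for the 1-D linearized shallow-water
    equations on a periodic staggered grid: the continuity equation is
    integrated forward with the old velocity, then the momentum
    equation backward with the just-updated height."""
    h = h - H * dt / dx * (np.roll(u, -1) - u)   # forward: uses old u
    u = u - g * dt / dx * (h - np.roll(h, 1))    # backward: uses new h
    return h, u

# A height bump stays bounded over many steps: the scheme is neutrally
# stable within its limit and has no growing time computational mode
n, g, H, dx = 64, 9.81, 100.0, 1.0
dt = 0.5 * dx / np.sqrt(g * H)                   # within the stability limit
h = np.exp(-0.1 * (np.arange(n) - n / 2) ** 2)
u = np.zeros(n)
for _ in range(2000):
    h, u = forward_backward_step(h, u, g, H, dt, dx)
```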
Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, Helen C.; Mansour, Nagi (Technical Monitor)
2002-01-01
Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes for the compressible Euler and Navier-Stokes equations has been developed and verified by the authors and collaborators. These schemes are suitable for the problems in question. Basically, the scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme. To control the amount of numerical dissipation, multiresolution wavelets are used as sensors to adaptively limit the amount and to aid the selection and/or blending of the appropriate types of numerical dissipation to be used. Magnetohydrodynamics (MHD) waves play a key role in drag reduction in highly maneuverable high speed combat aircraft, in space weather forecasting, and in the understanding of the dynamics of the evolution of our solar system and the main sequence stars. Although there exist a few well-studied second and third-order high-resolution shock-capturing schemes for the MHD in the literature, these schemes are too diffusive and not practical for turbulence/combustion MHD flows. On the other hand, extension of higher than third-order high-resolution schemes to the MHD system of equations is not straightforward. Unlike the hydrodynamic equations, the inviscid MHD system is non-strictly hyperbolic with non-convex fluxes. The wave structures and shock types are different from their hydrodynamic counterparts. Many of the non-traditional hydrodynamic shocks are not fully understood. 
Consequently, reliable and highly accurate numerical schemes for multiscale MHD equations pose a great challenge to algorithm development. In addition, controlling the numerical error of the divergence free condition of the magnetic fields for high order methods has been a stumbling block. Lower order methods are not practical for the astrophysical problems in question. We propose to extend our hydrodynamics schemes to the MHD equations with several desired properties over commonly used MHD schemes.
Marsalek, Ondrej; Markland, Thomas E
2016-02-07
Path integral molecular dynamics simulations, combined with an ab initio evaluation of interactions using electronic structure theory, incorporate the quantum mechanical nature of both the electrons and nuclei, which are essential to accurately describe systems containing light nuclei. However, path integral simulations have traditionally required a computational cost around two orders of magnitude greater than treating the nuclei classically, making them prohibitively costly for most applications. Here we show that the cost of path integral simulations can be dramatically reduced by extending our ring polymer contraction approach to ab initio molecular dynamics simulations. By using density functional tight binding as a reference system, we show that our ring polymer contraction scheme gives rapid and systematic convergence to the full path integral density functional theory result. We demonstrate the efficiency of this approach in ab initio simulations of liquid water and the reactive protonated and deprotonated water dimer systems. We find that the vast majority of the nuclear quantum effects are accurately captured using contraction to just the ring polymer centroid, which requires the same number of density functional theory calculations as a classical simulation. Combined with a multiple time step scheme using the same reference system, which allows the time step to be increased, this approach is as fast as a typical classical ab initio molecular dynamics simulation and 35× faster than a full path integral calculation, while still exactly including the quantum sampling of nuclei. This development thus offers a route to routinely include nuclear quantum effects in ab initio molecular dynamics simulations at negligible computational cost.
Assessment of numerical techniques for unsteady flow calculations
NASA Technical Reports Server (NTRS)
Hsieh, Kwang-Chung
1989-01-01
The characteristics of unsteady flow motions have long been a serious concern in the study of various fluid dynamic and combustion problems. With the advancement of computer resources, numerical approaches to these problems have become feasible. The objective of this paper is to assess the accuracy of several numerical schemes for unsteady flow calculations. In the present study, Fourier error analysis is performed for various numerical schemes based on a two-dimensional wave equation. Four methods selected from the error analysis are then adopted for further assessment. Model problems include unsteady quasi-one-dimensional inviscid flows, two-dimensional wave propagation, and unsteady two-dimensional inviscid flows. According to the comparison between numerical and exact solutions, although the second-order upwind scheme captures the unsteady flow and wave motions quite well, it is more dissipative than the sixth-order central difference scheme. Among the numerical approaches tested in this paper, the best-performing combination is the Runge-Kutta method for time integration with sixth-order central differencing for spatial discretization.
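The Fourier error analysis referred to above amounts to computing a scheme's modified wavenumber. For the standard sixth-order central difference this is a one-liner: an exact derivative operator would return k itself, so the deviation of k' from k quantifies the dispersive error (the stencil below is the standard one; the paper's full set of test schemes is not reproduced):

```python
import numpy as np

def modified_wavenumber_c6(k, dx):
    """Modified wavenumber k' of the sixth-order central difference
    f'_i ~ (45*(f_{i+1}-f_{i-1}) - 9*(f_{i+2}-f_{i-2})
            + (f_{i+3}-f_{i-3})) / (60*dx),
    obtained by substituting f = exp(i*k*x).  The leading error is
    O((k*dx)^6), and at the grid Nyquist wavenumber k' drops to zero."""
    t = k * dx
    return (45.0 * np.sin(t) - 9.0 * np.sin(2.0 * t)
            + np.sin(3.0 * t)) / (30.0 * dx)
```

Plotting k'/k against k*dx for several schemes is exactly the kind of comparison such an error analysis produces: well-resolved waves propagate accurately, while waves near the grid cutoff are badly misrepresented.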
A cut-cell immersed boundary technique for fire dynamics simulation
NASA Astrophysics Data System (ADS)
Vanella, Marcos; McDermott, Randall; Forney, Glenn
2015-11-01
Fire simulation around complex geometry is gaining increasing attention in performance-based design of fire protection systems, fire-structure interaction, and pollutant transport in complex terrain, among others. This presentation will focus on our present effort in improving the capability of FDS (Fire Dynamics Simulator, developed at the Fire Research Division, NIST. https://github.com/firemodels/fds-smv) to represent fire scenarios around complex bodies. Velocities in the vicinity of the bodies are reconstructed using a classical immersed boundary scheme (Fadlun and co-workers, J. Comput. Phys., 161:35-60, 2000). Also, a conservative treatment of scalar transport equations (i.e. for chemical species) will be presented. In our method, discrete conservation and no penetration of species across solid boundaries are enforced using a cut-cell finite volume scheme. The small cell problem inherent to the method is tackled using explicit-implicit domain decomposition for scalars within the FDS time integration scheme. Some details on the derivation, implementation, and numerical tests of this numerical scheme will be discussed.
Short-Term Retrospective Land Data Assimilation Schemes
NASA Technical Reports Server (NTRS)
Houser, P. R.; Cosgrove, B. A.; Entin, J. K.; Lettenmaier, D.; ODonnell, G.; Mitchell, K.; Marshall, C.; Lohmann, D.; Schaake, J. C.; Duan, Q.;
2000-01-01
Subsurface moisture and temperature and snow/ice stores exhibit persistence on various time scales that has important implications for the extended prediction of climatic and hydrologic extremes. Hence, to improve their specification of the land surface, many numerical weather prediction (NWP) centers have incorporated complex land surface schemes in their forecast models. However, because land storages are integrated states, errors in NWP forcing accumulate in these stores, which leads to incorrect surface water and energy partitioning. This has motivated the development of Land Data Assimilation Schemes (LDAS) that can be used to constrain NWP surface storages. An LDAS is an uncoupled land surface scheme that is forced primarily by observations and is therefore less affected by NWP forcing biases. The implementation of an LDAS also provides the opportunity to correct the model's trajectory using remotely sensed observations of soil temperature, soil moisture, and snow using data assimilation methods. The inclusion of data assimilation in LDAS will greatly increase its predictive capacity, as well as provide high-quality land surface assimilated data.
Conversion and Extraction of Insoluble Organic Materials in Meteorites
NASA Technical Reports Server (NTRS)
Locke, Darren R.; Burton, Aaron S.; Niles, Paul B.
2016-01-01
We endeavor to develop and implement methods in our laboratory to convert and extract insoluble organic materials (IOM) from low-carbon-bearing meteorites (such as ordinary chondrites) and Precambrian terrestrial rocks, for the purpose of determining IOM structure and prebiotic chemistries preserved in these types of samples. The general scheme of converting and extracting IOM in samples is summarized in Figure 1. First, powdered samples are solvent extracted in a micro-Soxhlet apparatus multiple times using solvents ranging from non-polar to polar (hexane, non-polar; dichloromethane, non-polar to polar; methanol, polar protic; and acetonitrile, polar aprotic). Second, the solid residue from the solvent extractions is processed using strong acids, hydrochloric and hydrofluoric, to dissolve minerals and isolate the IOM. Third, the isolated IOM is subjected to both thermal (pyrolysis) and chemical (oxidation) degradation to release compounds from the macromolecular material. Finally, the products from oxidation and pyrolysis are analyzed by gas chromatography - mass spectrometry (GCMS). We are working toward an integrated method and analysis scheme that will allow us to determine prebiotic chemistries in ordinary chondrites and Precambrian terrestrial rocks. Powerful techniques that we are including are stepwise, flash, and gradual pyrolysis, and ruthenium tetroxide oxidation. More details of the integrated scheme will be presented.
Hanuschkin, Alexander; Kunkel, Susanne; Helias, Moritz; Morrison, Abigail; Diesmann, Markus
2010-01-01
Traditionally, event-driven simulations have been limited to the very restricted class of neuronal models for which the timing of future spikes can be expressed in closed form. Recently, the class of models that is amenable to event-driven simulation has been extended by the development of techniques to accurately calculate firing times for some integrate-and-fire neuron models that do not enable the prediction of future spikes in closed form. The motivation of this development is the general perception that time-driven simulations are imprecise. Here, we demonstrate that a globally time-driven scheme can calculate firing times that cannot be discriminated from those calculated by an event-driven implementation of the same model; moreover, the time-driven scheme incurs lower computational costs. The key insight is that time-driven methods are based on identifying a threshold crossing in the recent past, which can be implemented by a much simpler algorithm than the techniques for predicting future threshold crossings that are necessary for event-driven approaches. As run time is dominated by the cost of the operations performed at each incoming spike, which includes spike prediction in the case of event-driven simulation and retrospective detection in the case of time-driven simulation, the simple time-driven algorithm outperforms the event-driven approaches. Additionally, our method is generally applicable to all commonly used integrate-and-fire neuronal models; we show that a non-linear model employing a standard adaptive solver can reproduce a reference spike train with a high degree of precision. PMID:21031031
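The retrospective threshold-crossing detection that makes the time-driven scheme cheap can be sketched for a leaky integrate-and-fire neuron: advance the membrane with the exact exponential propagator, and when threshold is exceeded, locate the spike by interpolating within the just-completed grid interval. The parameters and the linear interpolation are illustrative choices, not the paper's exact implementation:

```python
import math

def lif_time_driven(i_ext, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0):
    """Globally time-driven leaky integrate-and-fire neuron
    (tau*dV/dt = -V + I): the exact exponential propagator advances V
    one grid step at a time, and a threshold crossing is detected
    retrospectively and located by linear interpolation in [t-dt, t]."""
    prop = math.exp(-dt / tau)
    v, t, spikes = 0.0, 0.0, []
    for i in i_ext:
        v_old = v
        v = prop * v + (1.0 - prop) * i   # exact for piecewise-constant input
        t += dt
        if v >= v_th:
            frac = (v_th - v_old) / (v - v_old)   # retrospective detection
            spikes.append(t - dt + frac * dt)
            v = v_reset
    return spikes

# Constant drive i = 2.0: the exact first spike time is tau*ln(2) ~ 6.931
spike_times = lif_time_driven([2.0] * 200)
```

Detecting a crossing in the recent past needs only a comparison and an interpolation per step, which is the simplicity argument the paper makes against predicting future crossings.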
Organization of functional interaction of corporate information systems
NASA Astrophysics Data System (ADS)
Safronov, V. V.; Barabanov, V. F.; Podvalniy, S. L.; Nuzhnyy, A. M.
2018-03-01
In this article, methods for integrating specialized software systems are analyzed and a concept of seamless integration of production solutions is offered. Structural and functional schemes of the specialized software, developed in view of this concept, are shown. The proposed schemes and models are adapted for a machine-building enterprise.
Real-time ultrasonic weld evaluation system
NASA Astrophysics Data System (ADS)
Katragadda, Gopichand; Nair, Satish; Liu, Harry; Brown, Lawrence M.
1996-11-01
Ultrasonic testing techniques are currently used as an alternative to radiography for detecting, classifying, and sizing weld defects, and for evaluating weld quality. Typically, ultrasonic weld inspections are performed manually, which requires significant operator expertise and time. Thus, in recent years, the emphasis has been on developing automated methods to aid or replace operators in critical weld inspections where inspection time, reliability, and operator safety are major issues. During this period, significant advances were made in the areas of weld defect classification and sizing. Very few of these methods, however, have found their way into the market, largely due to the lack of an integrated approach enabling real-time implementation. Also, not much research effort was directed at improving weld acceptance criteria. This paper presents an integrated system utilizing state-of-the-art techniques for complete automation of the weld inspection procedure. The modules discussed include transducer tracking, classification, sizing, and weld acceptance criteria. Transducer tracking was studied by experimentally evaluating sonic and optical position tracking techniques; details of this evaluation are presented. Classification is obtained using a multi-layer perceptron. Results from different feature extraction schemes, including a new method based on a combination of time- and frequency-domain signal representations, are given. Algorithms developed to automate defect registration and sizing are discussed. A fuzzy-logic weld acceptance criterion is presented, describing how this scheme provides improved robustness compared to the traditional flow-diagram standards.
Unifying time evolution and optimization with matrix product states
NASA Astrophysics Data System (ADS)
Haegeman, Jutho; Lubich, Christian; Oseledets, Ivan; Vandereycken, Bart; Verstraete, Frank
2016-10-01
We show that the time-dependent variational principle provides a unifying framework for time-evolution methods and optimization methods in the context of matrix product states. In particular, we introduce a new integration scheme for studying time evolution, which can cope with arbitrary Hamiltonians, including those with long-range interactions. Rather than a Suzuki-Trotter splitting of the Hamiltonian, which is the idea behind the adaptive time-dependent density matrix renormalization group method or time-evolving block decimation, our method is based on splitting the projector onto the matrix product state tangent space as it appears in the Dirac-Frenkel time-dependent variational principle. We discuss how the resulting algorithm resembles the density matrix renormalization group (DMRG) algorithm for finding ground states so closely that it can be implemented by changing just a few lines of code and it inherits the same stability and efficiency. In particular, our method is compatible with any Hamiltonian for which ground-state DMRG can be implemented efficiently. In fact, DMRG is obtained as a special case of our scheme for imaginary time evolution with infinite time step.
Steady-state and dynamic analysis of a jet engine, gas lubricated shaft seal
NASA Technical Reports Server (NTRS)
Shapiro, W.; Colsher, R.
1974-01-01
Dynamic response of a gas-lubricated, jet-engine main shaft seal was analytically established as a function of collar misalignment and secondary seal friction. Response was obtained by a forward integration-in-time (time-transient) scheme, which traces a time history of seal motions in all its degrees of freedom. Results were summarized in the form of a seal tracking map which indicated regions of acceptable collar misalignments and secondary seal friction. Methodology, results and interpretations are comprehensively described.
Broadly tunable, low timing jitter, high repetition rate optoelectronic comb generator
Metcalf, A. J.; Quinlan, F.; Fortier, T. M.; Diddams, S. A.; Weiner, A. M.
2016-01-01
We investigate the low timing jitter properties of a tunable single-pass optoelectronic frequency comb generator. The scheme is flexible in that both the repetition rate and center frequency can be continuously tuned. When operated with 10 GHz comb spacing, the integrated residual pulse-to-pulse timing jitter is 11.35 fs (1 Hz to 10 MHz) with no feedback stabilization. The corresponding phase noise at 1 Hz offset from the photodetected 10 GHz carrier is −100 dBc/Hz. PMID:26865734
General relaxation schemes in multigrid algorithms for higher order singularity methods
NASA Technical Reports Server (NTRS)
Oskam, B.; Fray, J. M. J.
1981-01-01
Relaxation schemes based on approximate and incomplete factorization techniques (AF) are described. The AF schemes allow construction of a fast multigrid method for solving integral equations of the second and first kind. The smoothing factors for integral equations of the first kind, and a comparison with similar results for equations of the second kind, are a novel item. Application of the multigrid algorithm shows convergence to the level of the truncation error of a second-order accurate panel method.
NASA Astrophysics Data System (ADS)
Somogyi, Gábor
2013-04-01
We finish the definition of a subtraction scheme for computing NNLO corrections to QCD jet cross sections. In particular, we perform the integration of the soft-type contributions to the doubly unresolved counterterms via the method of Mellin-Barnes representations. With these final ingredients in place, the definition of the scheme is complete and the computation of fully differential rates for electron-positron annihilation into two and three jets at NNLO accuracy becomes feasible.
Lightweight ECC based RFID authentication integrated with an ID verifier transfer protocol.
He, Debiao; Kumar, Neeraj; Chilamkurti, Naveen; Lee, Jong-Hyouk
2014-10-01
The radio frequency identification (RFID) technology has been widely adopted and is being deployed as a dominant identification technology in the health care domain, for applications such as medical information authentication, patient tracking, and blood transfusion medicine. With more and more stringent security and privacy requirements for RFID-based authentication schemes, elliptic curve cryptography (ECC) based RFID authentication schemes have been proposed to meet the requirements. However, many recently published ECC-based RFID authentication schemes have serious security weaknesses. In this paper, we propose a new ECC-based RFID authentication scheme integrated with an ID verifier transfer protocol that overcomes the weaknesses of the existing schemes. A comprehensive security analysis has been conducted to show the strong security properties provided by the proposed authentication scheme. Moreover, the performance of the proposed authentication scheme is analyzed in terms of computational cost, communication cost, and storage requirements.
Top-up injection schemes for future circular lepton collider
NASA Astrophysics Data System (ADS)
Aiba, M.; Goddard, B.; Oide, K.; Papaphilippou, Y.; Saá Hernández, Á.; Shwartz, D.; White, S.; Zimmermann, F.
2018-02-01
Top-up injection is an essential ingredient for the future circular lepton collider (FCC-ee) to maximize the integrated luminosity and it determines the design performance. In ttbar operation mode, with a beam energy of 175 GeV, the design lifetime of ∼1 h is the shortest of the four anticipated operational modes, and the beam lifetime may be even shorter in actual operation. A highly robust top-up injection scheme is consequently imperative. Various top-up methods are investigated and a number of suitable schemes are considered in developing alternative designs for the injection straight section of the collider ring. For the first time, we consider multipole-kicker off-energy injection, for minimizing detector background in top-up operation, and the use of a thin wire septum in a lepton storage ring, for maximizing the luminosity.
Methods of separation of variables in turbulence theory
NASA Technical Reports Server (NTRS)
Tsuge, S.
1978-01-01
Two schemes for closing the turbulent moment equations are proposed, both of which separate the double-correlation equations into single-point equations. The first is based on neglecting the triple correlation, leading to an equation that differs from the small-perturbation gasdynamic equations in that the separation constant appears as the frequency. Grid-produced turbulence is described in this light as time-independent, cylindrically isotropic turbulence. Application to wall turbulence, guided by a new asymptotic method for the Orr-Sommerfeld equation, reveals a neutrally stable mode of essentially three-dimensional nature. The second closure scheme is based on an assumed identity of the separated variables through which the triple and quadruple correlations are formed. The resulting equation adds, to its equivalent in the first scheme, a nonlinear convolution integral in the frequency that describes the role of the triple correlation in direct energy cascading.
Huang, Yi-Shao; Liu, Wei-Ping; Wu, Min; Wang, Zheng-Wu
2014-09-01
This paper presents a novel observer-based decentralized hybrid adaptive fuzzy control scheme for a class of large-scale continuous-time multiple-input multiple-output (MIMO) uncertain nonlinear systems whose state variables are unmeasurable. The scheme integrates fuzzy logic systems, state observers, and strictly positive real conditions to deal with three issues in the control of a large-scale MIMO uncertain nonlinear system: algorithm design, controller singularity, and transient response. The design of the hybrid adaptive fuzzy controller is then extended to address a general large-scale uncertain nonlinear system. It is shown that the resulting closed-loop large-scale system remains asymptotically stable and the tracking error converges to zero. The advantages of our scheme are demonstrated through simulations. Copyright © 2014. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Niu, Hailin; Zhang, Xiaotong; Liu, Qiang; Feng, Youbin; Li, Xiuhong; Zhang, Jialin; Cai, Erli
2015-12-01
The ocean surface albedo (OSA) is a deciding factor in ocean net surface shortwave radiation (ONSSR) estimation. Several OSA schemes have been proposed over the years, but no conclusion has been reached as to which best estimates the ONSSR. On the basis of an analysis of existing OSA parameterizations, including those of Briegleb et al. (B), Taylor et al. (T), Hansen et al. (H), Jin et al. (J), Preisendorfer and Mobley (PM86), and Feng (F), this study discusses the differences in the impact of the OSA on ONSSR estimation under actual downward shortwave radiation (DSR). We then discuss the necessity and applicability of integrating the more complicated OSA schemes into climate models. It is concluded that the solar zenith angle (SZA) and the wind speed are the two most significant factors affecting broadband OSA; consequently, the different OSA parameterizations diverge most strongly in regions of high latitudes and strong winds. The choice of OSA scheme can lead to differences in the ONSSR results on the order of 20 W m-2. Taylor's scheme gives the best estimate, with Feng's result just behind. However, the accuracy of the estimated instantaneous OSA changes with local time: Jin's scheme generally performs best at noon and in the afternoon, while PM86's is the best in the morning, which indicates that the more complicated OSA schemes reflect the temporal variation of the OSA better than the simple ones do.
Towards Flange-to-Flange Turbopump Simulations for Liquid Rocket Engines
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Williams, Robert
2000-01-01
The primary objective of this research is to support the design of liquid rocket systems for the Advanced Space Transportation System. Since the space launch systems of the near future are likely to rely on liquid rocket engines, increasing the efficiency and reliability of the engine components is an important task. One of the major problems in the liquid rocket engine is understanding the fluid dynamics of the fuel and oxidizer flows from the fuel tank to the plume. Understanding the flow through the entire turbopump geometry through numerical simulation will be of significant value toward design and will help improve the safety of future space missions. One of the milestones of this effort is to develop, apply, and demonstrate the capability and accuracy of 3D CFD methods as efficient design analysis tools on high performance computer platforms. The development of the MPI and MLP versions of the INS3D code is currently underway. The serial version of the INS3D code is a multidimensional incompressible Navier-Stokes solver based on overset grid technology. INS3D-MPI is based on the explicit message-passing interface across processors and is primarily suited for distributed memory systems. INS3D-MLP is based on the multi-level parallel method and is suitable for distributed-shared memory systems. For the entire turbopump simulations, moving boundary capability and efficient time-accurate integration methods are built into the flow solver. To handle the geometric complexity and moving boundary problems, an overset grid scheme is incorporated into the solver so that new connectivity data are obtained at each time step. The Chimera overlapped grid scheme allows subdomains to move relative to each other and provides great flexibility when the boundary movement creates large displacements. The performance of the two time integration schemes for time-accurate computations is investigated.
For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive. The current geometry for the LOX boost turbopump has various rotating and stationary components, such as the inducer, stators, kicker, and hydraulic turbine, where the flow is extremely unsteady. Figure 1 shows the geometry and computed surface pressure of the inducer. The inducer and the hydraulic turbine rotate at different rotational speeds.
Real-time realizations of the Bayesian Infrasonic Source Localization Method
NASA Astrophysics Data System (ADS)
Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.
2015-12-01
The Bayesian Infrasonic Source Localization method (BISL), introduced by Mordak et al. (2010) and upgraded by Marcillo et al. (2014), is designed for accurate estimation of the origin of atmospheric events at local, regional and global scales by seismic and infrasonic networks and arrays. The BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals can be evaluated exactly and expressed via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, executed as PYTHON-FORTRAN code, demonstrates high performance on a set of model and real data.
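The flavor of such a simplification, replacing a numerical integral with standard functions, can be shown on a toy likelihood product: the time integral of two Gaussian likelihoods has a closed form. This is only an illustration of the general idea, not the authors' actual target function, and all numbers below are made up:

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Normal probability density function."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Brute force: integrate the product of two arrival-time likelihoods
# on a fine grid (what a purely numerical scheme would do).
t = np.linspace(-50.0, 50.0, 200001)
integrand = gauss_pdf(t, 3.0, 2.0) * gauss_pdf(t, 5.0, 1.5)
numeric = float(np.sum(0.5 * (integrand[:-1] + integrand[1:])) * (t[1] - t[0]))

# Exact: a product of two Gaussians integrates to a Gaussian in the
# difference of the means, with the variances added.
analytic = gauss_pdf(3.0, 5.0, np.hypot(2.0, 1.5))
```

Evaluating the closed form once replaces the entire grid sweep, which is the source of the speed-up at each hypothetical source location.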
Driven Langevin systems: fluctuation theorems and faithful dynamics
NASA Astrophysics Data System (ADS)
Sivak, David; Chodera, John; Crooks, Gavin
2014-03-01
Stochastic differential equations of motion (e.g., Langevin dynamics) provide a popular framework for simulating molecular systems. Any computational algorithm must discretize these equations, yet the resulting finite time step integration schemes suffer from several practical shortcomings. We show how any finite time step Langevin integrator can be thought of as a driven, nonequilibrium physical process. Amended by an appropriate work-like quantity (the shadow work), nonequilibrium fluctuation theorems can characterize or correct for the errors introduced by the use of finite time steps. We also quantify, for the first time, the magnitude of deviations between the sampled stationary distribution and the desired equilibrium distribution for equilibrium Langevin simulations of solvated systems of varying size. We further show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
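The splitting language of the abstract can be made concrete with one widely used splitting, the BAOAB scheme; this is a sketch for illustration only, and the abstract does not state that BAOAB is the particular splitting the authors single out:

```python
import numpy as np

def baoab_step(x, v, force, dt, mass, gamma, kT, rng):
    """One BAOAB step for Langevin dynamics: half kick (B), half
    drift (A), exact Ornstein-Uhlenbeck velocity update (O), then
    A and B again. Finite-dt bias is what a shadow-work accounting
    would characterize or correct."""
    v = v + 0.5 * dt * force(x) / mass                 # B: half kick
    x = x + 0.5 * dt * v                               # A: half drift
    c1 = np.exp(-gamma * dt)                           # O: exact OU solve
    c2 = np.sqrt((1.0 - c1 * c1) * kT / mass)
    v = c1 * v + c2 * rng.standard_normal(np.shape(v))
    x = x + 0.5 * dt * v                               # A: half drift
    v = v + 0.5 * dt * force(x) / mass                 # B: half kick
    return x, v
```

For a harmonic force f(x) = -k x, a long run should reproduce equipartition, Var(x) ≈ kT/k, up to a small O(dt²) bias.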
NASA Astrophysics Data System (ADS)
Ishida, H.; Ota, Y.; Sekiguchi, M.; Sato, Y.
2016-12-01
A three-dimensional (3D) radiative transfer calculation scheme is developed to estimate the horizontal transport of radiation energy in very high resolution (on the order of 10 m in spatial grid) simulations of cloud evolution, especially for horizontally inhomogeneous clouds such as shallow cumulus and stratocumulus. Horizontal radiative transfer due to inhomogeneous clouds appears to cause local heating/cooling in the atmosphere at fine spatial scales. It is, however, usually difficult to estimate these 3D effects, because 3D radiative transfer often requires large computational resources compared to a plane-parallel approximation. This study incorporates a solution scheme that explicitly solves the 3D radiative transfer equation into a numerical simulation, because such a scheme is advantageous for calculations over a sequence of time evolution (i.e., the scene at each time step differs little from that at the previous one). The scheme is also appropriate for calculating radiation with strong absorption, such as in the infrared region. For efficient computation, the scheme utilizes several techniques, e.g., the multigrid method for the iterative solution and a correlated-k distribution method refined for efficient approximation of the wavelength integration. As a case study, the scheme is applied to an infrared broadband radiation calculation in a broken cloud field generated with a large eddy simulation model. The horizontal transport of infrared radiation, which cannot be estimated by the plane-parallel approximation, and its variation in time can be retrieved. The calculation results show that the horizontal divergences and convergences of the infrared radiation flux are not negligible, especially at the boundaries of clouds and within optically thin clouds, and that radiative cooling at the lateral boundaries of clouds may reduce infrared radiative heating within clouds.
In future work, the 3D effects on radiative heating/cooling could be incorporated into atmospheric numerical models.
Impacts of parameterized orographic drag on the Northern Hemisphere winter circulation
Bechtold, Peter; Beljaars, Anton; Bozzo, Alessio; Pithan, Felix; Shepherd, Theodore G.; Zadra, Ayrton
2016-01-01
A recent intercomparison exercise proposed by the Working Group for Numerical Experimentation (WGNE) revealed that the parameterized, or unresolved, surface stress in weather forecast models is highly model‐dependent, especially over orography. Models of comparable resolution differ over land by as much as 20% in zonal mean total subgrid surface stress (τtot). The way τtot is partitioned between the different parameterizations is also model‐dependent. In this study, we simulated in a particular model an increase in τtot comparable with the spread found in the WGNE intercomparison. This increase was simulated in two ways, namely by increasing independently the contributions to τtot of the turbulent orographic form drag scheme (TOFD) and of the orographic low‐level blocking scheme (BLOCK). Increasing the parameterized orographic drag leads to significant changes in surface pressure, zonal wind and temperature in the Northern Hemisphere during winter both in 10 day weather forecasts and in seasonal integrations. However, the magnitude of these changes in circulation strongly depends on which scheme is modified. In 10 day forecasts, stronger changes are found when the TOFD stress is increased, while on seasonal time scales the effects are of comparable magnitude, although different in detail. At these time scales, the BLOCK scheme affects the lower stratosphere winds through changes in the resolved planetary waves which are associated with surface impacts, while the TOFD effects are mostly limited to the lower troposphere. The partitioning of τtot between the two schemes appears to play an important role at all time scales. PMID:27668040
Impacts of parameterized orographic drag on the Northern Hemisphere winter circulation
NASA Astrophysics Data System (ADS)
Sandu, Irina; Bechtold, Peter; Beljaars, Anton; Bozzo, Alessio; Pithan, Felix; Shepherd, Theodore G.; Zadra, Ayrton
2016-03-01
A recent intercomparison exercise proposed by the Working Group for Numerical Experimentation (WGNE) revealed that the parameterized, or unresolved, surface stress in weather forecast models is highly model-dependent, especially over orography. Models of comparable resolution differ over land by as much as 20% in zonal mean total subgrid surface stress (τtot). The way τtot is partitioned between the different parameterizations is also model-dependent. In this study, we simulated in a particular model an increase in τtot comparable with the spread found in the WGNE intercomparison. This increase was simulated in two ways, namely by increasing independently the contributions to τtot of the turbulent orographic form drag scheme (TOFD) and of the orographic low-level blocking scheme (BLOCK). Increasing the parameterized orographic drag leads to significant changes in surface pressure, zonal wind and temperature in the Northern Hemisphere during winter both in 10 day weather forecasts and in seasonal integrations. However, the magnitude of these changes in circulation strongly depends on which scheme is modified. In 10 day forecasts, stronger changes are found when the TOFD stress is increased, while on seasonal time scales the effects are of comparable magnitude, although different in detail. At these time scales, the BLOCK scheme affects the lower stratosphere winds through changes in the resolved planetary waves which are associated with surface impacts, while the TOFD effects are mostly limited to the lower troposphere. The partitioning of τtot between the two schemes appears to play an important role at all time scales.
NASA Technical Reports Server (NTRS)
Estes, R. H.
1977-01-01
A computer software system is described which computes global numerical solutions of the integro-differential Laplace tidal equations, including dissipation terms and ocean loading and self-gravitation effects, for arbitrary diurnal and semidiurnal tidal constituents. The integration algorithm features a successive approximation scheme for the integro-differential system, with time stepping by forward differences in the time variable and central differences in the spatial variables.
Edgeworth expansions of stochastic trading time
NASA Astrophysics Data System (ADS)
Decamps, Marc; De Schepper, Ann
2010-08-01
Under most local and stochastic volatility models, the underlying forward is assumed to be a positive function of a time-changed Brownian motion; this relates the implied volatility smile to the so-called activity rate in the market. Following Young and DeWitt-Morette (1986) [8], we propose to apply the Duru-Kleinert process-cum-time transformation in the path integral to formulate the transition density of the forward. The method leads to asymptotic expansions of the transition density around a Gaussian kernel corresponding to the average activity in the market conditional on the forward value. The approximation is illustrated numerically for pricing vanilla options under the CEV model and the popular normal SABR model. The asymptotics can also be used for Monte Carlo simulations or backward integration schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, Aftab; Khan, S. N.; Wilson, Brian G.
2011-07-06
A numerically efficient, accurate, and easily implemented integration scheme over convex Voronoi polyhedra (VP) is presented for use in ab initio electronic-structure calculations. We combine a weighted Voronoi tessellation with isoparametric integration via Gauss-Legendre quadratures to provide rapidly convergent VP integrals for a variety of integrands, including those with a Coulomb singularity. We showcase the capability of our approach by first applying it to an analytic charge-density model, achieving machine-precision accuracy with expected convergence properties in milliseconds. For contrast, we compare our results to those using shape functions and show our approach is greater than 10^5 times faster and 10^7 times more accurate. Furthermore, a weighted Voronoi tessellation also allows for a physics-based partitioning of space that guarantees convex, space-filling VP while reflecting accurate atomic sizes and site charges, as we show within KKR methods applied to Fe-Pd alloys.
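The quadrature building block named in the abstract is standard Gauss-Legendre; the sketch below shows only the tensor-product rule over a reference box, leaving out the isoparametric mapping of each Voronoi polyhedron onto such cells:

```python
import numpy as np

def gauss_box_3d(f, bounds, n=8):
    """Tensor-product Gauss-Legendre quadrature of f(x, y, z) over the
    rectangular box [(ax, bx), (ay, by), (az, bz)]. Exact for
    polynomials of degree <= 2n - 1 along each axis."""
    axes, weights = [], []
    for a, b in bounds:
        p, w = np.polynomial.legendre.leggauss(n)
        axes.append(0.5 * (b - a) * p + 0.5 * (a + b))   # map [-1, 1] -> [a, b]
        weights.append(0.5 * (b - a) * w)
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    W = (weights[0][:, None, None]
         * weights[1][None, :, None]
         * weights[2][None, None, :])
    return float(np.sum(W * f(X, Y, Z)))
```

With smooth integrands, a modest number of points per axis already reaches near machine precision, which is the convergence behavior the abstract reports for the polyhedral version.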
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schanen, Michel; Marin, Oana; Zhang, Hong
Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
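The binomial memory level of such a scheme is usually analyzed with Griewank and Walther's bound: s checkpoints and at most r forward recomputations of any step suffice for a trajectory of C(s+r, s) steps. A minimal sketch of that bound (the disk level and the asynchrony of the paper's scheme are not modeled here):

```python
from math import comb

def max_adjoinable_steps(snapshots: int, repeats: int) -> int:
    """Longest time-step trajectory whose adjoint can be computed with
    `snapshots` in-memory checkpoints when no step is re-run forward
    more than `repeats` times (binomial checkpointing bound)."""
    return comb(snapshots + repeats, snapshots)
```

For example, 10 in-memory checkpoints with at most 3 recomputations already cover C(13, 10) = 286 time steps, which is why binomial checkpointing keeps memory so far below the step count.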
Optimal Multi-scale Demand-side Management for Continuous Power-Intensive Processes
NASA Astrophysics Data System (ADS)
Mitra, Sumit
With the advent of deregulation in electricity markets and an increasing share of intermittent power generation sources, the profitability of industrial consumers that operate power-intensive processes has become directly linked to the variability in energy prices. Thus, for industrial consumers that are able to adjust to the fluctuations, time-sensitive electricity prices (as part of so-called Demand-Side Management (DSM) in the smart grid) offer potential economic incentives. In this thesis, we introduce optimization models and decomposition strategies for the multi-scale Demand-Side Management of continuous power-intensive processes. On an operational level, we derive a mode formulation for scheduling under time-sensitive electricity prices. The formulation is applied to air separation plants and cement plants to minimize the operating cost. We also describe how a mode formulation can be used for industrial combined heat and power plants that are co-located at integrated chemical sites to increase operating profit by adjusting their steam and electricity production according to their inherent flexibility. Furthermore, a robust optimization formulation is developed to address the uncertainty in electricity prices by accounting for correlations and multiple ranges in the realization of the random variables. On a strategic level, we introduce a multi-scale model that provides an understanding of the value of flexibility of the current plant configuration and the value of additional flexibility in terms of retrofits for Demand-Side Management under product demand uncertainty. The integration of multiple time scales leads to large-scale two-stage stochastic programming problems, for which we need to apply decomposition strategies in order to obtain a good solution within a reasonable amount of time.
Hence, we describe two decomposition schemes that can be applied to solve two-stage stochastic programming problems: First, a hybrid bi-level decomposition scheme with novel Lagrangean-type and subset-type cuts to strengthen the relaxation. Second, an enhanced cross-decomposition scheme that integrates Benders decomposition and Lagrangean decomposition on a scenario basis. To demonstrate the effectiveness of our developed methodology, we provide several industrial case studies throughout the thesis.
NASA Technical Reports Server (NTRS)
Khayat, Michael A.; Wilton, Donald R.; Fink, Patrick W.
2007-01-01
Simple and efficient numerical procedures using singularity cancellation methods are presented for evaluating singular and near-singular potential integrals. Four different transformations are compared and the advantages of the Radial-angular transform are demonstrated. A method is then described for optimizing this integration scheme.
Golze, Dorothea; Benedikter, Niels; Iannuzzi, Marcella; Wilhelm, Jan; Hutter, Jürg
2017-01-21
An integral scheme for the efficient evaluation of two-center integrals over contracted solid harmonic Gaussian functions is presented. Integral expressions are derived for local operators that depend on the position vector of one of the two Gaussian centers. These expressions are then used to derive the formula for three-index overlap integrals where two of the three Gaussians are located at the same center. The efficient evaluation of the latter is essential for local resolution-of-the-identity techniques that employ an overlap metric. We compare the performance of our integral scheme to the widely used Cartesian Gaussian-based method of Obara and Saika (OS). Non-local interaction potentials such as standard Coulomb, modified Coulomb, and Gaussian-type operators, which occur in range-separated hybrid functionals, are also included in the performance tests. The speed-up with respect to the OS scheme is up to three orders of magnitude for both integrals and their derivatives. In particular, our method is increasingly efficient for large angular momenta and highly contracted basis sets.
NASA Astrophysics Data System (ADS)
Golze, Dorothea; Benedikter, Niels; Iannuzzi, Marcella; Wilhelm, Jan; Hutter, Jürg
2017-01-01
An integral scheme for the efficient evaluation of two-center integrals over contracted solid harmonic Gaussian functions is presented. Integral expressions are derived for local operators that depend on the position vector of one of the two Gaussian centers. These expressions are then used to derive the formula for three-index overlap integrals where two of the three Gaussians are located at the same center. The efficient evaluation of the latter is essential for local resolution-of-the-identity techniques that employ an overlap metric. We compare the performance of our integral scheme to the widely used Cartesian Gaussian-based method of Obara and Saika (OS). Non-local interaction potentials such as standard Coulomb, modified Coulomb, and Gaussian-type operators, which occur in range-separated hybrid functionals, are also included in the performance tests. The speed-up with respect to the OS scheme is up to three orders of magnitude for both integrals and their derivatives. In particular, our method is increasingly efficient for large angular momenta and highly contracted basis sets.
NASA Astrophysics Data System (ADS)
Wang, Jinting; Lu, Liqiao; Zhu, Fei
2018-01-01
The finite element (FE) method is a powerful tool that has been applied to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integration in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure. In this way, the task execution time (TET) decreases, allowing the scale of the numerical substructure model to increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. The CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes pronounced as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even with a large time step and a large time delay.
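The reported advantage of the CDM with diagonal damping follows from its update equation: when the mass and damping matrices are diagonal, the left-hand operator is diagonal, so each step costs only one matrix-vector product with the (possibly sparse) stiffness matrix. A minimal dense-NumPy sketch (function names and the test system are illustrative, not from the paper):

```python
import numpy as np

def central_difference(M_diag, C_diag, K, x0, v0, f, dt, n_steps):
    """Explicit central-difference stepping for M x'' + C x' + K x = f(t)
    with diagonal mass M and damping C: the per-step 'solve' is then
    elementwise, and the only matrix work is the product K @ x."""
    inv_eff = 1.0 / (M_diag / dt**2 + C_diag / (2.0 * dt))
    a0 = (f(0.0) - K @ x0 - C_diag * v0) / M_diag
    x_prev = x0 - dt * v0 + 0.5 * dt**2 * a0      # fictitious x at t = -dt
    x = x0.copy()
    history = [x0.copy()]
    for n in range(n_steps):
        rhs = (f(n * dt) - K @ x
               + (2.0 * M_diag / dt**2) * x
               - (M_diag / dt**2 - C_diag / (2.0 * dt)) * x_prev)
        x_prev, x = x, inv_eff * rhs
        history.append(x.copy())
    return np.array(history)
```

Replacing the dense `K` with a SciPy sparse matrix leaves the code unchanged, which is where the sparse-storage speed-up discussed above enters.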
Hirakawa, Teruo; Suzuki, Teppei; Bowler, David R; Miyazaki, Tsuyoshi
2017-10-11
We discuss the development and implementation of a constant temperature (NVT) molecular dynamics scheme that combines the Nosé-Hoover chain thermostat with the extended Lagrangian Born-Oppenheimer molecular dynamics (BOMD) scheme, using a linear scaling density functional theory (DFT) approach. An integration scheme for this canonical-ensemble extended Lagrangian BOMD is developed and discussed in the context of the Liouville operator formulation. Linear scaling DFT canonical-ensemble extended Lagrangian BOMD simulations are tested on bulk silicon and silicon carbide systems to evaluate our integration scheme. The results show that the conserved quantity remains stable with no systematic drift even in the presence of the thermostat.
A high-order Lagrangian-decoupling method for the incompressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Ho, Lee-Wing; Maday, Yvon; Patera, Anthony T.; Ronquist, Einar M.
1989-01-01
A high-order Lagrangian-decoupling method is presented for the unsteady convection-diffusion and incompressible Navier-Stokes equations. The method is based upon: (1) Lagrangian variational forms that reduce the convection-diffusion equation to a symmetric initial value problem; (2) implicit high-order backward-differentiation finite-difference schemes for integration along characteristics; (3) finite element or spectral element spatial discretizations; and (4) mesh-invariance procedures and high-order explicit time-stepping schemes for deducing function values at convected space-time points. The method improves upon previous finite element characteristic methods through the systematic and efficient extension to high order accuracy, and the introduction of a simple structure-preserving characteristic-foot calculation procedure which is readily implemented on modern architectures. The new method is significantly more efficient than explicit-convection schemes for the Navier-Stokes equations due to the decoupling of the convection and Stokes operators and the attendant increase in temporal stability. Numerous numerical examples are given for the convection-diffusion and Navier-Stokes equations for the particular case of a spectral element spatial discretization.
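Item (4), deducing function values at convected space-time points, is the core semi-Lagrangian operation. A deliberately first-order sketch for 1D periodic advection with constant velocity (the method above uses high-order backtracking and finite/spectral element interpolation instead of the linear interpolation shown here):

```python
import numpy as np

def semi_lagrangian_step(phi, u, dt, x, L):
    """Advance d(phi)/dt + u d(phi)/dx = 0 by one step: trace each grid
    point back along its characteristic (first-order foot location) and
    interpolate the old field there (linear, periodic on [0, L))."""
    feet = (x - u * dt) % L                 # characteristic feet
    return np.interp(feet, x, phi, period=L)
```

When the displacement u*dt equals exactly one grid spacing, the feet land on grid points and the step reduces to a circular shift, a convenient sanity check for any characteristic-foot implementation.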
An annular superposition integral for axisymmetric radiators.
Kelly, James F; McGough, Robert J
2007-02-01
A fast integral expression for computing the nearfield pressure is derived for axisymmetric radiators. This method replaces the sum of contributions from concentric annuli with an exact double integral that converges much faster than methods that evaluate the Rayleigh-Sommerfeld integral or the generalized King integral. Expressions are derived for plane circular pistons using both continuous wave and pulsed excitations. Several commonly used apodization schemes for the surface velocity distribution are considered, including polynomial functions and a "smooth piston" function. The effect of different apodization functions on the spectral content of the wave field is explored. Quantitative error and time comparisons between the new method, the Rayleigh-Sommerfeld integral, and the generalized King integral are discussed. At all error levels considered, the annular superposition method achieves a speed-up of at least a factor of 4 relative to the point-source method and a factor of 3 relative to the generalized King integral without increasing the computational complexity.
Quantum Logic with Cavity Photons From Single Atoms.
Holleczek, Annemarie; Barter, Oliver; Rubenok, Allison; Dilley, Jerome; Nisbet-Jones, Peter B R; Langfahl-Klabes, Gunnar; Marshall, Graham D; Sparrow, Chris; O'Brien, Jeremy L; Poulios, Konstantinos; Kuhn, Axel; Matthews, Jonathan C F
2016-07-08
We demonstrate quantum logic using narrow linewidth photons that are produced with an a priori nonprobabilistic scheme from a single ^{87}Rb atom strongly coupled to a high-finesse cavity. We use a controlled-not gate integrated into a photonic chip to entangle these photons, and we observe nonclassical correlations between photon detection events separated by periods exceeding the travel time across the chip by 3 orders of magnitude. This enables quantum technology that will use the properties of both narrow-band single photon sources and integrated quantum photonics.
Deng, Yong-Yuan; Chen, Chin-Ling; Tsaur, Woei-Jiunn; Tang, Yung-Wen; Chen, Jung-Hsuan
2017-12-15
As sensor networks and cloud computation technologies have rapidly developed over recent years, many services and applications integrating these technologies into daily life have come together as an Internet of Things (IoT). At the same time, aging populations have increased the need for expanded and more efficient elderly care services. Fortunately, elderly people can now wear sensing devices which relay data to a personal wireless device, forming a body area network (BAN). These personal wireless devices collect and integrate patients' personal physiological data, and then transmit the data to the backend of the network for related diagnostics. However, a great deal of the information transmitted by such systems is sensitive data, and must therefore be subject to stringent security protocols. Protecting this data from unauthorized access is thus an important issue in IoT-related research. In regard to a cloud healthcare environment, scholars have proposed a secure mechanism to protect sensitive patient information. Their schemes provide a general architecture; however, these previous schemes still have some vulnerability, and thus cannot guarantee complete security. This paper proposes a secure and lightweight body-sensor network based on the Internet of Things for cloud healthcare environments, in order to address the vulnerabilities discovered in previous schemes. The proposed authentication mechanism is applied to a medical reader to provide a more comprehensive architecture while also providing mutual authentication, and guaranteeing data integrity, user untraceability, and forward and backward secrecy, in addition to being resistant to replay attack.
HIDEC adaptive engine control system flight evaluation results
NASA Technical Reports Server (NTRS)
Yonke, W. A.; Landy, R. J.; Stewart, J. F.
1987-01-01
An integrated flight propulsion control mode, the Adaptive Engine Control System (ADECS), has been developed and flight tested on an F-15 aircraft as part of the NASA Highly Integrated Digital Electronic Control program. The ADECS system realizes additional engine thrust by increasing the engine pressure ratio (EPR) at intermediate and afterburning power, with the amount of EPR uptrim modulated using a predictor scheme for angle-of-attack and sideslip angle. Substantial improvement in aircraft and engine performance was demonstrated, with a 16 percent rate of climb increase, a 14 percent reduction in time to climb, and a 15 percent reduction in time to accelerate. Significant EPR uptrim capability was found with angles-of-attack up to 20 degrees.
Finite element implementation of state variable-based viscoplasticity models
NASA Technical Reports Server (NTRS)
Iskovitz, I.; Chang, T. Y. P.; Saleeb, A. F.
1991-01-01
The implementation of state variable-based viscoplasticity models is made in a general purpose finite element code for structural applications of metals deformed at elevated temperatures. Two constitutive models, Walker's and Robinson's models, are studied in conjunction with two implicit integration methods: the trapezoidal rule with Newton-Raphson iterations and an asymptotic integration algorithm. A comparison is made between the two integration methods, and the latter method appears to be computationally more appealing in terms of numerical accuracy and CPU time. However, in order to make the asymptotic algorithm robust, it is necessary to include a self adaptive scheme with subincremental step control and error checking of the Jacobian matrix at the integration points. Three examples are given to illustrate the numerical aspects of the integration methods tested.
NASA Astrophysics Data System (ADS)
Feng, Xueshang; Li, Caixia; Xiang, Changqing; Zhang, Man; Li, HuiChao; Wei, Fengsi
2017-11-01
A second-order path-conservative scheme with a Godunov-type finite-volume method has been implemented to advance the equations of single-fluid solar wind plasma magnetohydrodynamics (MHD) in time. This code operates on the six-component composite grid system in three-dimensional spherical coordinates with hexahedral cells of quadrilateral frustum type. The generalized Osher-Solomon Riemann solver is employed based on a numerical integration of the path-dependent dissipation matrix. For simplicity, the straight line segment path is used, and the path integral is evaluated in a fully numerical way by a high-order numerical Gauss-Legendre quadrature. Besides its very close similarity to Godunov type, the resulting scheme retains the attractive features of the original solver: it is nonlinear, free of entropy-fix, differentiable, and complete, in that each characteristic field results in a different numerical viscosity, due to the full use of the MHD eigenstructure. By using a minmod limiter for spatial oscillation control, the path-conservative scheme is realized for the generalized Lagrange multiplier and the extended generalized Lagrange multiplier formulation of solar wind MHD systems. This new model that is second order in space and time is written in the FORTRAN language with Message Passing Interface parallelization and validated in modeling the time-dependent large-scale structure of the solar corona, driven continuously by Global Oscillation Network Group data. To demonstrate the suitability of our code for the simulation of solar wind, we present selected results from 2009 October 9 to 2009 December 29 that show its capability of producing a structured solar corona in agreement with solar coronal observations.
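The numerical evaluation of the path-dependent dissipation integral along a straight-line segment path can be sketched with Gauss-Legendre quadrature. As a hedged stand-in for the MHD dissipation matrix we use the scalar Burgers flux, A(q) = q and |A(q)| = |q|; the node count and test states are our own choices.

```python
import numpy as np

def osher_dissipation(qL, qR, n_nodes=32):
    """Integrate |A(psi(s))| along psi(s) = qL + s*(qR - qL), s in [0, 1]."""
    x, w = np.polynomial.legendre.leggauss(n_nodes)  # nodes/weights on [-1, 1]
    s = 0.5 * (x + 1.0)                              # map nodes to [0, 1]
    ws = 0.5 * w                                     # rescale weights
    psi = qL + s * (qR - qL)
    return np.sum(ws * np.abs(psi)) * (qR - qL)      # dissipation term

# For qL = -1, qR = 2 the exact value of the path integral of |psi| is 5/6.
approx = osher_dissipation(-1.0, 2.0) / 3.0
print(approx)
```

The integrand has a kink where psi changes sign, so the quadrature is not exact, but with 32 nodes it is accurate to well below one percent, consistent with the "fully numerical" evaluation described above.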
Data-Driven Modeling of Solar Corona by a New 3D Path-Conservative Osher-Solomon MHD Model
NASA Astrophysics Data System (ADS)
Feng, X. S.; Li, C.
2017-12-01
A second-order path-conservative scheme with a Godunov-type finite volume method (FVM) has been implemented to advance the equations of single-fluid solar wind plasma magnetohydrodynamics (MHD) in time. This code operates on the six-component composite grid system in 3D spherical coordinates with hexahedral cells of quadrilateral frustum type. The generalized Osher-Solomon Riemann solver is employed based on a numerical integration of the path-dependent dissipation matrix. For simplicity, the straight line segment path is used and the path integral is evaluated in a fully numerical way by high-order numerical Gauss-Legendre quadrature. Besides its close similarity to the Godunov type, the resulting scheme retains the attractive features of the original solver: it is nonlinear, free of entropy-fix, differentiable and complete, in that each characteristic field results in a different numerical viscosity, due to the full use of the MHD eigenstructure. By using a minmod limiter for spatial oscillation control, the path-conservative scheme is realized for the generalized Lagrange multiplier (GLM) and the extended generalized Lagrange multiplier (EGLM) formulations of solar wind MHD systems. This new model, second order in space and time, is written in the FORTRAN language with Message Passing Interface (MPI) parallelization, and validated in modeling the time-dependent large-scale structure of the solar corona, driven continuously by Global Oscillation Network Group (GONG) data. To demonstrate the suitability of our code for the simulation of solar wind, we present selected results from October 9 to December 29, 2009, and from the year 2008, to show its capability of producing structured solar wind in agreement with the observations.
NASA Technical Reports Server (NTRS)
Gabrielsen, R. E.; Uenal, A.
1981-01-01
A numerical scheme for solving two dimensional Fredholm integral equations of the second kind is developed. The proof of the convergence of the numerical scheme is shown for three cases: the case of periodic kernels, the case of semiperiodic kernels, and the case of nonperiodic kernels. Applications to the incompressible, stationary Navier-Stokes problem are of primary interest.
NASA Astrophysics Data System (ADS)
Želi, Velibor; Zorica, Dušan
2018-02-01
A generalization of the heat conduction equation is obtained by considering the system of equations consisting of the energy balance equation and a fractional-order constitutive heat conduction law, assumed in the form of the distributed-order Cattaneo type. The Cauchy problem for the system of energy balance equation and constitutive heat conduction law is treated analytically through Fourier and Laplace integral transform methods, as well as numerically by the method of finite differences, using Adams-Bashforth and Grünwald-Letnikov schemes to approximate derivatives in the temporal domain and the leapfrog scheme for spatial derivatives. Numerical examples, showing the time evolution of temperature and heat flux spatial profiles, demonstrate the applicability and good agreement of both methods in cases of multi-term and power-type distributed-order heat conduction laws.
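The Grünwald-Letnikov approximation used for the temporal fractional derivatives above can be sketched in a few lines. The weights follow the recurrence w_0 = 1, w_k = w_{k-1}*(1 - (alpha+1)/k); the test function f(t) = t and the parameters are our own illustrative choices.

```python
import math

def gl_derivative(f, t, alpha, h):
    """Grünwald-Letnikov fractional derivative of order alpha at time t."""
    n = int(round(t / h))
    w, acc = 1.0, f(t)                    # k = 0 term, weight 1
    for k in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / k      # w_k = (-1)^k * binom(alpha, k)
        acc += w * f(t - k * h)
    return acc / h**alpha

# For f(t) = t with f(0) = 0, the order-alpha derivative at t is
# t^(1-alpha) / Gamma(2 - alpha); at alpha = 1/2, t = 1 this is 1/Gamma(3/2).
alpha = 0.5
approx = gl_derivative(lambda t: t, 1.0, alpha, h=1e-3)
exact = 1.0 / math.gamma(1.5)
print(abs(approx - exact))
```

The scheme is first-order accurate in h, which is why a fairly small step is used here.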
An unstructured-mesh finite-volume MPDATA for compressible atmospheric dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kühnlein, Christian, E-mail: christian.kuehnlein@ecmwf.int; Smolarkiewicz, Piotr K., E-mail: piotr.smolarkiewicz@ecmwf.int
An advancement of the unstructured-mesh finite-volume MPDATA (Multidimensional Positive Definite Advection Transport Algorithm) is presented that formulates the error-compensative pseudo-velocity of the scheme to rely only on face-normal advective fluxes to the dual cells, in contrast to the full vector employed in previous implementations. This is essentially achieved by expressing the temporal truncation error underlying the pseudo-velocity in a form consistent with the flux-divergence of the governing conservation law. The development is especially important for integrating fluid dynamics equations on non-rectilinear meshes whenever face-normal advective mass fluxes are employed for transport compatible with mass continuity, the latter being essential for flux-form schemes. In particular, the proposed formulation enables large-time-step semi-implicit finite-volume integration of the compressible Euler equations using MPDATA on arbitrary hybrid computational meshes. Furthermore, it facilitates multiple error-compensative iterations of the finite-volume MPDATA and improved overall accuracy. The advancement combines straightforwardly with earlier developments, such as the nonoscillatory option, the infinite-gauge variant, and moving curvilinear meshes. A comprehensive description of the scheme is provided for a hybrid horizontally-unstructured vertically-structured computational mesh for efficient global atmospheric flow modelling. The proposed finite-volume MPDATA is verified using selected 3D global atmospheric benchmark simulations, representative of hydrostatic and non-hydrostatic flow regimes. Besides the added capabilities, the scheme retains fully the efficacy of established finite-volume MPDATA formulations.
Schemes for efficient transmission of encoded video streams on high-speed networks
NASA Astrophysics Data System (ADS)
Ramanathan, Srinivas; Vin, Harrick M.; Rangan, P. Venkat
1994-04-01
In this paper, we argue that significant performance benefits can accrue if integrated networks implement application-specific mechanisms that account for the diversities in media compression schemes. Towards this end, we propose a simple, yet effective, strategy called Frame Induced Packet Discarding (FIPD), in which, upon detection of the loss of a threshold number (determined by an application's video encoding scheme) of packets belonging to a video frame, the network attempts to discard all the remaining packets of that frame. In order to analytically quantify the performance of FIPD, so as to obtain fractional frame losses that can be guaranteed to video channels, we develop a finite-state, discrete-time Markov chain model of the FIPD strategy. The fractional frame loss thus computed can serve as the criterion for admission control at the network. Performance evaluations demonstrate the utility of the FIPD strategy.
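The FIPD policy itself is simple enough to sketch directly: once the number of lost packets in a frame reaches the application-supplied threshold, the remaining packets of that frame are discarded rather than forwarded. The interface below (a list of per-packet loss flags) is our own illustration, not the paper's model.

```python
def fipd(loss_flags, threshold):
    """Return (packets_forwarded, frame_usable) for one video frame.

    loss_flags: per-packet booleans, True meaning the packet was lost.
    threshold:  loss count at which the frame is deemed undecodable.
    """
    forwarded, losses = 0, 0
    for lost in loss_flags:
        if losses >= threshold:
            break                 # discard the rest of the frame
        if lost:
            losses += 1
        else:
            forwarded += 1
    return forwarded, losses < threshold

# Frame of 6 packets with packets 3 and 5 lost:
pattern = [False, False, True, False, True, False]
print(fipd(pattern, threshold=2))  # second loss triggers discard of packet 6
print(fipd(pattern, threshold=3))  # threshold never reached; frame kept
```

Discarding the tail of a doomed frame is what frees network bandwidth for frames that can still be decoded.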
Microelectromechanical reprogrammable logic device.
Hafiz, M A A; Kosuru, L; Younis, M I
2016-03-29
In modern computing, the Boolean logic operations are set by interconnect schemes between the transistors. As the miniaturization in the component level to enhance the computational power is rapidly approaching physical limits, alternative computing methods are vigorously pursued. One of the desired aspects in the future computing approaches is the provision for hardware reconfigurability at run time to allow enhanced functionality. Here we demonstrate a reprogrammable logic device based on the electrothermal frequency modulation scheme of a single microelectromechanical resonator, capable of performing all the fundamental 2-bit logic functions as well as n-bit logic operations. Logic functions are performed by actively tuning the linear resonance frequency of the resonator operated at room temperature and under modest vacuum conditions, reprogrammable by the a.c.-driving frequency. The device is fabricated using complementary metal oxide semiconductor compatible mass fabrication process, suitable for on-chip integration, and promises an alternative electromechanical computing scheme.
A numerical relativity scheme for cosmological simulations
NASA Astrophysics Data System (ADS)
Daverio, David; Dirian, Yves; Mitsou, Ermis
2017-12-01
Cosmological simulations involving the fully covariant gravitational dynamics may prove relevant in understanding relativistic/non-linear features and, therefore, in taking better advantage of the upcoming large scale structure survey data. We propose a new 3 + 1 integration scheme for general relativity in the case where the matter sector contains a minimally-coupled perfect fluid field. The original feature is that we completely eliminate the fluid components through the constraint equations, thus remaining with a set of unconstrained evolution equations for the rest of the fields. This procedure does not constrain the lapse function and shift vector, so it holds in arbitrary gauge and also works for arbitrary equation of state. An important advantage of this scheme is that it allows one to define and pass an adaptation of the robustness test to the cosmological context, at least in the case of pressureless perfect fluid matter, which is the relevant one for late-time cosmology.
A novel double loop control model design for chemical unstable processes.
Cong, Er-Ding; Hu, Ming-Hui; Tu, Shan-Tung; Xuan, Fu-Zhen; Shao, Hui-He
2014-03-01
In this manuscript, based on the Smith predictor control scheme for unstable processes in industry, an improved double loop control model is proposed for chemical unstable processes. The inner loop stabilizes the integrating unstable process and transforms the original process into a first-order plus dead-time stable process. The outer loop enhances the performance of the set-point response, and a disturbance controller is designed to enhance the performance of the disturbance response. The improved control system is simple, with exact physical meaning, and its characteristic equation is easy to stabilize. The three controllers in the improved scheme are designed separately, so each controller is easy to design and gives good control performance for its respective closed-loop transfer function. The robust stability of the proposed control scheme is analyzed. Finally, case studies illustrate that the improved method can give better system performance than existing design methods.
NASA Astrophysics Data System (ADS)
Hegde, Ganapathi; Vaya, Pukhraj
2013-10-01
This article presents a parallel architecture for the 3-D discrete wavelet transform (3-DDWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present coherent Daubechies (9, 7) filter bank. This 3-DDWT architecture has advantages such as no group-of-pictures restriction and reduced memory referencing. It offers low power consumption, low latency and high throughput. The computing technique is based on the concept that the lifting scheme minimises the storage requirement. The application-specific integrated circuit implementation of the proposed architecture was synthesised using the 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.
Application of Intel Many Integrated Core (MIC) accelerators to the Pleim-Xiu land surface scheme
NASA Astrophysics Data System (ADS)
Huang, Melin; Huang, Bormin; Huang, Allen H.
2015-10-01
The land-surface model (LSM) is one physics process in the Weather Research and Forecasting (WRF) model. The LSM includes atmospheric information from the surface layer scheme, radiative forcing from the radiation scheme, and precipitation forcing from the microphysics and convective schemes, together with internal information on the land's state variables and land-surface properties. The LSM provides heat and moisture fluxes over land points and sea-ice points. The Pleim-Xiu (PX) scheme is one such LSM. The PX LSM features three pathways for moisture fluxes: evapotranspiration, soil evaporation, and evaporation from wet canopies. To accelerate the computation of this scheme, we employ the Intel Xeon Phi Many Integrated Core (MIC) architecture, a many-core coprocessor design well suited to efficient parallelization and vectorization. Our results show that the MIC-based optimization of this scheme running on a Xeon Phi coprocessor 7120P improves the performance by 2.3x and 11.7x compared to the original code running, respectively, on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marsalek, Ondrej; Markland, Thomas E., E-mail: tmarkland@stanford.edu
Path integral molecular dynamics simulations, combined with an ab initio evaluation of interactions using electronic structure theory, incorporate the quantum mechanical nature of both the electrons and nuclei, which are essential to accurately describe systems containing light nuclei. However, path integral simulations have traditionally required a computational cost around two orders of magnitude greater than treating the nuclei classically, making them prohibitively costly for most applications. Here we show that the cost of path integral simulations can be dramatically reduced by extending our ring polymer contraction approach to ab initio molecular dynamics simulations. By using density functional tight binding as a reference system, we show that our ring polymer contraction scheme gives rapid and systematic convergence to the full path integral density functional theory result. We demonstrate the efficiency of this approach in ab initio simulations of liquid water and the reactive protonated and deprotonated water dimer systems. We find that the vast majority of the nuclear quantum effects are accurately captured using contraction to just the ring polymer centroid, which requires the same number of density functional theory calculations as a classical simulation. Combined with a multiple time step scheme using the same reference system, which allows the time step to be increased, this approach is as fast as a typical classical ab initio molecular dynamics simulation and 35× faster than a full path integral calculation, while still exactly including the quantum sampling of nuclei. This development thus offers a route to routinely include nuclear quantum effects in ab initio molecular dynamics simulations at negligible computational cost.
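Ring polymer contraction can be sketched in its limiting case: contracting P beads down to a single bead, which is exactly the bead average (the centroid) the abstract says captures most nuclear quantum effects. The Fourier-space contraction below (keep the lowest normal modes, rescale) is a generic illustration for one Cartesian coordinate, not the paper's implementation.

```python
import numpy as np

def contract(beads, p_new):
    """Contract a ring polymer from len(beads) beads to p_new beads.

    Works in Fourier space: keep only the lowest normal modes, then
    transform back on the coarser ring and rescale by p_new/P.
    """
    P = len(beads)
    modes = np.fft.rfft(beads)            # ring-polymer normal modes
    keep = p_new // 2 + 1                 # modes representable on p_new beads
    return np.fft.irfft(modes[:keep], n=p_new) * (p_new / P)

rng = np.random.default_rng(0)
beads = rng.normal(size=32)               # one coordinate of a 32-bead polymer
centroid = contract(beads, 1)
print(abs(centroid[0] - beads.mean()))    # contraction to 1 bead = centroid
```

Because the centroid needs only one force evaluation per step, this is the same electronic-structure cost as a classical simulation, which is the source of the speed-up reported above.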
NASA Astrophysics Data System (ADS)
Huang, Melin; Huang, Bormin; Huang, Allen H.-L.
2015-10-01
Cumulus parameterization schemes are responsible for the sub-grid-scale effects of convective and/or shallow clouds, and are intended to represent vertical fluxes due to unresolved updrafts and downdrafts and the compensating motion outside the clouds. Some schemes additionally provide cloud and precipitation field tendencies in the convective column, and momentum tendencies due to convective transport of momentum. All of the schemes provide the convective component of surface rainfall. Betts-Miller-Janjic (BMJ) is one scheme that fulfills these purposes in the Weather Research and Forecasting (WRF) model. The National Centers for Environmental Prediction (NCEP) has tried to optimize the BMJ scheme for operational application. As there are no interactions among horizontal grid points, this scheme is very suitable for parallel computation. The Intel Xeon Phi Many Integrated Core (MIC) architecture, with its efficient parallelization and vectorization, allows us to optimize the BMJ scheme. Compared to the original code running, respectively, on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, the MIC-based optimization of this scheme running on a Xeon Phi coprocessor 7120P improves the performance by 2.4x and 17.0x.
Autonomous power expert system
NASA Technical Reports Server (NTRS)
Ringer, Mark J.; Quinn, Todd M.
1990-01-01
The goal of the Autonomous Power System (APS) program is to develop and apply intelligent problem solving and control technologies to the Space Station Freedom Electrical Power Systems (SSF/EPS). The objectives of the program are to establish artificial intelligence/expert system technology paths, to create knowledge based tools with advanced human-operator interfaces, and to integrate and interface knowledge-based and conventional control schemes. This program is being developed at the NASA-Lewis. The APS Brassboard represents a subset of a 20 KHz Space Station Power Management And Distribution (PMAD) testbed. A distributed control scheme is used to manage multiple levels of computers and switchgear. The brassboard is comprised of a set of intelligent switchgear used to effectively switch power from the sources to the loads. The Autonomous Power Expert System (APEX) portion of the APS program integrates a knowledge based fault diagnostic system, a power resource scheduler, and an interface to the APS Brassboard. The system includes knowledge bases for system diagnostics, fault detection and isolation, and recommended actions. The scheduler autonomously assigns start times to the attached loads based on temporal and power constraints. The scheduler is able to work in a near real time environment for both scheduling and dynamic replanning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Jong-Won; Hirao, Kimihiko, E-mail: hirao@riken.jp
Since the advent of the hybrid functional in 1993, it has become a main quantum chemical tool for the calculation of energies and properties of molecular systems. Following the introduction of the long-range corrected hybrid scheme for density functional theory a decade later, the applicability of the hybrid functional has been further amplified due to the resulting increased performance on orbital energy, excitation energy, non-linear optical property, barrier height, and so on. Nevertheless, the high cost associated with the evaluation of Hartree-Fock (HF) exchange integrals remains a bottleneck for the broader and more active application of hybrid functionals to large molecular and periodic systems. Here, we propose a very simple yet efficient method for the computation of the long-range corrected hybrid scheme. It uses a modified two-Gaussian attenuating operator instead of the error function for the long-range HF exchange integral. As a result, the two-Gaussian HF operator, which mimics the shape of the error-function operator, reduces computational time dramatically (e.g., about 14 times acceleration in a C diamond calculation using periodic boundary conditions) and enables lower scaling with system size, while maintaining the improved features of long-range corrected density functional theory.
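The idea of a two-Gaussian operator mimicking the shape of the error-function attenuation can be illustrated with a simple least-squares fit: approximate 1 - erf(mu*r) by c1*exp(-a1*r^2) + c2*exp(-a2*r^2). The exponent grid, fitting window, and mu are our own illustrative choices, not the paper's parameters.

```python
import numpy as np
from math import erf

mu = 1.0
r = np.linspace(1e-3, 5.0, 400)
target = 1.0 - np.array([erf(mu * x) for x in r])   # shape the Gaussians must match

# Coarse grid search over exponent pairs; linear coefficients by least squares.
best = None
for a1 in np.linspace(0.05, 1.0, 20):
    for a2 in np.linspace(1.0, 6.0, 20):
        G = np.column_stack([np.exp(-a1 * r**2), np.exp(-a2 * r**2)])
        c, *_ = np.linalg.lstsq(G, target, rcond=None)
        rms = np.sqrt(np.mean((G @ c - target) ** 2))
        if best is None or rms < best[0]:
            best = (rms, a1, a2, c)

print(best[0])   # RMS misfit of the best two-Gaussian model
```

Because Gaussian-Gaussian exchange integrals have fast analytic forms, a good two-Gaussian mimic of the erf shape is what makes the reported acceleration plausible.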
Vehicle Integrated Prognostic Reasoner (VIPR) Metric Report
NASA Technical Reports Server (NTRS)
Cornhill, Dennis; Bharadwaj, Raj; Mylaraswamy, Dinkar
2013-01-01
This document outlines a set of metrics for evaluating the diagnostic and prognostic schemes developed for the Vehicle Integrated Prognostic Reasoner (VIPR), a system-level reasoner that encompasses the multiple levels of large, complex systems such as those for aircraft and spacecraft. VIPR health managers are organized hierarchically and operate together to derive diagnostic and prognostic inferences from symptoms and conditions reported by a set of diagnostic and prognostic monitors. For layered reasoners such as VIPR, the overall performance cannot be evaluated by metrics solely directed toward timely detection and accuracy of estimation of the faults in individual components. Among other factors, overall vehicle reasoner performance is governed by the effectiveness of the communication schemes between monitors and reasoners in the architecture, and the ability to propagate and fuse relevant information to make accurate, consistent, and timely predictions at different levels of the reasoner hierarchy. We outline an extended set of diagnostic and prognostics metrics that can be broadly categorized as evaluation measures for diagnostic coverage, prognostic coverage, accuracy of inferences, latency in making inferences, computational cost, and sensitivity to different fault and degradation conditions. We report metrics from Monte Carlo experiments using two variations of an aircraft reference model that supported both flat and hierarchical reasoning.
Bending and stretching finite element analysis of anisotropic viscoelastic composite plates
NASA Technical Reports Server (NTRS)
Hilton, Harry H.; Yi, Sung
1990-01-01
Finite element algorithms have been developed to analyze linear anisotropic viscoelastic plates, with or without holes, subjected to mechanical (bending, tension), temperature, and hygrothermal loadings. The analysis is based on Laplace transforms rather than direct time integrations in order to improve the accuracy of the results and save on extensive computational time and storage. The time dependent displacement fields in the transverse direction for the cross ply and angle ply laminates are calculated and the stacking sequence effects of the laminates are discussed in detail. Creep responses for the plates with or without a circular hole are also studied. The numerical results compare favorably with analytical solutions, i.e. within 1.8 percent for bending and 10(exp -3) percent for tension. The tension results of the present method are compared with those using the direct time integration scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-04-01
The phase appearance/disappearance issue presents serious numerical challenges in two-phase flow simulations. Many existing reactor safety analysis codes use different kinds of treatments for the phase appearance/disappearance problem. However, to our best knowledge, there are no fully satisfactory solutions. Additionally, the majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many situations, it is desirable to use high-resolution spatial discretization and fully implicit time integration schemes to reduce numerical errors. In this work, we adapted a high-resolution spatial discretization scheme on a staggered grid mesh and fully implicit time integration methods (such as BDF1 and BDF2) to solve the two-phase flow problems. The discretized nonlinear system was solved by the Jacobian-free Newton-Krylov (JFNK) method, which does not require the derivation and implementation of an analytical Jacobian matrix. These methods were tested with a few two-phase flow problems with phase appearance/disappearance phenomena considered, such as a linear advection problem, an oscillating manometer problem, and a sedimentation problem. The JFNK method demonstrated extremely robust and stable behavior in solving the two-phase flow problems with phase appearance/disappearance. No special treatments such as water level tracking or void fraction limiting were used. High-resolution spatial discretization and the second-order fully implicit method also demonstrated their capabilities in significantly reducing numerical errors.
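The defining trick of JFNK is that the Krylov solver never sees the Jacobian matrix, only finite-difference products J v ≈ (F(u + eps*v) - F(u)) / eps. The sketch below is a hedged, generic illustration on a small symmetric test problem (discrete diffusion plus a cubic term) of our own choosing, so plain conjugate gradients serves as the Krylov method.

```python
import numpy as np

n = 16
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD tridiagonal
b = np.ones(n)

def F(u):
    """Nonlinear residual: discrete diffusion plus a cubic term."""
    return A @ u + u**3 - b

def jv(u, v, Fu):
    """Finite-difference Jacobian-vector product; no Jacobian is ever formed."""
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return np.zeros_like(v)
    eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u)) / nv
    return (F(u + eps * v) - Fu) / eps

def cg(u, rhs, iters=40, tol=1e-12):
    """Conjugate gradients, with the matrix applied only through jv()."""
    x = np.zeros_like(rhs)
    r = rhs.copy()
    p = r.copy()
    Fu = F(u)
    for _ in range(iters):
        Ap = jv(u, p, Fu)
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

u = np.zeros(n)
for _ in range(30):                 # Newton iterations
    res = F(u)
    if np.linalg.norm(res) < 1e-9:
        break
    u -= cg(u, res)                 # Newton step from the Krylov solve
print(np.linalg.norm(F(u)))
```

For the nonsymmetric Jacobians of real two-phase flow systems, GMRES would replace CG, but the Jacobian-free matvec is identical.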
Nadort, Annemarie; Woolthuis, Rutger G.; van Leeuwen, Ton G.; Faber, Dirk J.
2013-01-01
We present integrated Laser Speckle Contrast Imaging (LSCI) and Sidestream Dark Field (SDF) flowmetry to provide real-time, non-invasive and quantitative measurements of speckle decorrelation times related to microcirculatory flow. Using a multi-exposure acquisition scheme, precise speckle decorrelation times were obtained. Applying SDF-LSCI in vitro and in vivo allows direct comparison between speckle contrast decorrelation and flow velocities, while imaging the phantom and microcirculation architecture. This resulted in a novel analysis approach that distinguishes decorrelation due to flow from other additive decorrelation sources. PMID:24298399
NASA Astrophysics Data System (ADS)
Behroozmand, Ahmad A.; Auken, Esben; Fiandaca, Gianluca; Christiansen, Anders Vest; Christensen, Niels B.
2012-08-01
We present a new, efficient and accurate forward modelling and inversion scheme for magnetic resonance sounding (MRS) data. MRS, also called surface-nuclear magnetic resonance (surface-NMR), is the only non-invasive geophysical technique that directly detects free water in the subsurface. Based on the physical principle of NMR, protons of the water molecules in the subsurface are excited at a specific frequency, and the superposition of signals from all protons within the excited earth volume is measured to estimate the subsurface water content and other hydrological parameters. In this paper, a new inversion scheme is presented in which the entire data set is used, and multi-exponential behaviour of the NMR signal is approximated by the simple stretched-exponential approach. Compared to the mono-exponential interpretation of the decaying NMR signal, we introduce a single extra parameter, the stretching exponent, which helps describe the porosity in terms of a single relaxation time parameter, and helps to determine correct initial amplitude and relaxation time of the signal. Moreover, compared to a multi-exponential interpretation of the MRS data, the decay behaviour is approximated with considerably fewer parameters. The forward response is calculated in an efficient numerical manner in terms of magnetic field calculation, discretization and integration schemes, which allows fast computation while maintaining accuracy. A piecewise linear transmitter loop is considered for electromagnetic modelling of conductivities in the layered half-space providing electromagnetic modelling of arbitrary loop shapes. The decaying signal is integrated over time windows, called gates, which increases the signal-to-noise ratio, particularly at late times, and the data vector is described with a minimum number of samples, that is, gates. 
The accuracy of the forward response is investigated by comparing an MRS forward response with responses from three other approaches, outlining significant differences between the approaches. Altogether, a full MRS forward response is calculated in about 20 s and scales so that on 10 processors the calculation time is reduced to about 3-4 s. The proposed approach is examined through synthetic data and through a field example, which demonstrate the capability of the scheme. The results of the field example agree well with the information from an in-situ borehole.
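The stretched-exponential decay model described above can be illustrated with a short fit on synthetic data; the amplitudes, times, and the log-log linearization trick are assumptions for the sketch, not the authors' inversion scheme.

```python
import numpy as np

# Hypothetical noise-free MRS decay following the stretched-exponential
# model V(t) = V0 * exp(-(t/T)**c), with c the stretching exponent.
V0, T_true, c_true = 200.0, 0.15, 0.7
t = np.linspace(0.01, 0.5, 40)
V = V0 * np.exp(-(t / T_true) ** c_true)

# With the initial amplitude V0 known, the model linearizes:
# log(-log(V/V0)) = c*log(t) - c*log(T), a straight line in log(t).
y = np.log(-np.log(V / V0))
c_fit, intercept = np.polyfit(np.log(t), y, 1)
T_fit = np.exp(-intercept / c_fit)
```

In a real inversion V0 is itself unknown and the data are noisy and gated, so a full nonlinear least-squares fit over (V0, T, c) would replace this linearized shortcut.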
NASA Astrophysics Data System (ADS)
Somogyi, Gábor; Trócsányi, Zoltán
2008-08-01
In previous articles we outlined a subtraction scheme for regularizing doubly-real emission and real-virtual emission in next-to-next-to-leading order (NNLO) calculations of jet cross sections in electron-positron annihilation. In order to find the NNLO correction these subtraction terms have to be integrated over the factorized unresolved phase space and combined with the two-loop corrections. In this paper we perform the integration of all one-parton unresolved subtraction terms.
Quality Assurance in Engineering Education: Comparison of Accreditation Schemes and ISO 9001.
ERIC Educational Resources Information Center
Karapetrovic, Stanislav; Rajamani, Divakar; Willborn, Walter
1998-01-01
Outlines quality assurance schemes for distance-education technologies that are based on the ISO 9000 family of international quality-assurance standards. Argues that engineering faculties can establish such systems on the basis of, and integrated with, accreditation schemes. Contains 34 references. (DDR)
Optimal feedback control of turbulent channel flow
NASA Technical Reports Server (NTRS)
Bewley, Thomas; Choi, Haecheon; Temam, Roger; Moin, Parviz
1993-01-01
Feedback control equations were developed and tested for computing wall normal control velocities to control turbulent flow in a channel with the objective of reducing drag. The technique used is the minimization of a 'cost functional' which is constructed to represent some balance of the drag integrated over the wall and the net control effort. A distribution of wall velocities is found which minimizes this cost functional at some time shortly in the future, based on current observations of the flow near the wall. Preliminary direct numerical simulations of the scheme applied to turbulent channel flow indicate that it provides approximately 17 percent drag reduction. The mechanism apparent when the scheme is applied to a simplified flow situation is also discussed.
Raul, Pramod R; Pagilla, Prabhakar R
2015-05-01
In this paper, two adaptive Proportional-Integral (PI) control schemes are designed and discussed for control of web tension in Roll-to-Roll (R2R) manufacturing systems. R2R systems are used to transport continuous materials (called webs) on rollers from the unwind roll to the rewind roll. Maintaining web tension at the desired value is critical to many R2R processes such as printing, coating, lamination, etc. Existing fixed-gain PI tension control schemes currently used in industrial practice require extensive tuning and do not provide the desired performance for changing operating conditions and material properties. The first adaptive PI scheme utilizes the model reference approach, where the controller gains are estimated based on matching of the actual closed-loop tension control system with an appropriately chosen reference model. The second adaptive PI scheme utilizes the indirect adaptive control approach together with a relay feedback technique to automatically initialize the adaptive PI gains. These adaptive tension control schemes can be implemented on any R2R manufacturing system. The key features of the two adaptive schemes are that their designs are simple for practicing engineers, easy to implement in real-time, and automate the tuning process. Extensive experiments are conducted on a large experimental R2R machine which mimics many features of an industrial R2R machine. These experiments include trials with two different polymer webs and a variety of operating conditions. Implementation guidelines are provided for both adaptive schemes. Experimental results comparing the two adaptive schemes and a fixed-gain PI tension control scheme used in industrial practice are provided and discussed. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
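A minimal fixed-structure PI tension loop of the kind discussed above can be sketched as follows; the first-order web-tension plant and all gains are illustrative stand-ins, not the paper's adaptive design.

```python
# Discrete PI tension loop on a hypothetical first-order web-tension plant
# T' = -a*T + b*u; plant constants, gains, and setpoint are illustrative.
dt, Kp, Ki = 0.01, 2.0, 8.0
a, b = 5.0, 4.0
T_ref, T, integ = 50.0, 0.0, 0.0
for _ in range(5000):                 # 50 s of simulated time
    e = T_ref - T                     # tension error
    integ += e * dt                   # integral state
    u = Kp * e + Ki * integ           # PI control law
    T += dt * (-a * T + b * u)        # explicit Euler plant update
```

The paper's two schemes would replace the fixed (Kp, Ki) here with gains updated online, either from a reference-model matching condition or from a relay-feedback initialization.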
Han, Yaozhen; Liu, Xiangjie
2016-05-01
This paper presents a continuous higher-order sliding mode (HOSM) control scheme with time-varying gain for a class of uncertain nonlinear systems. The proposed controller is derived from the concept of geometric homogeneity and super-twisting algorithm, and includes two parts, the first part of which achieves smooth finite time stabilization of pure integrator chains. The second part conquers the twice differentiable uncertainty and realizes system robustness by employing super-twisting algorithm. Particularly, time-varying switching control gain is constructed to reduce the switching control action magnitude to the minimum possible value while keeping the property of finite time convergence. Examples concerning the perturbed triple integrator chains and excitation control for single-machine infinite bus power system are simulated respectively to demonstrate the effectiveness and applicability of the proposed approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
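The super-twisting part of the controller can be sketched on a perturbed single integrator; the gains follow a classic constant-gain tuning for a disturbance-derivative bound of roughly one, and are assumptions for the sketch rather than the paper's time-varying gains.

```python
import numpy as np

# Super-twisting loop on a perturbed single integrator x' = u + d(t).
# Gains follow the classic tuning k1 = 1.5*sqrt(L), k2 = 1.1*L for a
# disturbance-derivative bound L = 1; all values are illustrative.
k1, k2, dt = 1.5, 1.1, 1e-4
x, v = 1.0, 0.0
for n in range(200000):                              # 20 s of simulated time
    d = 0.3 * np.sin(np.pi * n * dt)                 # bounded smooth disturbance
    u = -k1 * np.sqrt(abs(x)) * np.sign(x) + v       # continuous part
    v += dt * (-k2 * np.sign(x))                     # discontinuous integral part
    x += dt * (u + d)
```

Because the sign function is hidden under an integrator, the applied control u stays continuous while the disturbance is rejected in finite time; the paper's contribution is to shrink these gains online.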
Zhao, Zhenguo; Shi, Wenbo
2014-01-01
Probabilistic signature schemes have been widely used in modern electronic commerce since they provide integrity, authenticity, and nonrepudiation. Recently, Wu and Lin proposed a novel probabilistic signature (PS) scheme using the bilinear square Diffie-Hellman (BSDH) problem. They also extended it to a universal designated verifier signature (UDVS) scheme. In this paper, we analyze the security of Wu et al.'s PS scheme and UDVS scheme. Through concrete attacks, we demonstrate that neither of their schemes is unforgeable. The security analysis shows that their schemes are not suitable for practical applications.
NASA Technical Reports Server (NTRS)
Allard, R.; Mack, B.; Bayoumi, M. M.
1989-01-01
Most robot systems lack a suitable hardware and software environment for the efficient research of new control and sensing schemes. Typically, engineers and researchers need to be experts in control, sensing, programming, communication and robotics in order to implement, integrate and test new ideas in a robot system. In order to reduce this time, the Robot Controller Test Station (RCTS) has been developed. It uses a modular hardware and software architecture allowing easy physical and functional reconfiguration of a robot. This is accomplished by emphasizing four major design goals: flexibility, portability, ease of use, and ease of modification. An enhanced distributed processing version of RCTS is described. It features an expanded and more flexible communication system design. Distributed processing results in the availability of more local computing power and retains the low cost of microprocessors. A large number of possible communication, control and sensing schemes can therefore be easily introduced and tested, using the same basic software structure.
Entropy Splitting and Numerical Dissipation
NASA Technical Reports Server (NTRS)
Yee, H. C.; Vinokur, M.; Djomehri, M. J.
1999-01-01
A rigorous stability estimate for arbitrary order of accuracy of spatial central difference schemes for initial-boundary value problems of nonlinear symmetrizable systems of hyperbolic conservation laws was established recently by Olsson and Oliger (1994) and Olsson (1995) and was applied to the two-dimensional compressible Euler equations for a perfect gas by Gerritsen and Olsson (1996) and Gerritsen (1996). The basic building block in developing the stability estimate is a generalized energy approach based on a special splitting of the flux derivative via a convex entropy function and certain homogeneous properties. Due to some of the unique properties of the compressible Euler equations for a perfect gas, the splitting resulted in the sum of a conservative portion and a non-conservative portion of the flux derivative, hereafter referred to as the "Entropy Splitting." There are several potential desirable attributes and side benefits of the entropy splitting for the compressible Euler equations that were not fully explored in Gerritsen and Olsson. The paper has several objectives. The first is to investigate the choice of the arbitrary parameter that determines the amount of splitting and its dependence on the type of physics of current interest to computational fluid dynamics. The second is to investigate in what manner the splitting affects the nonlinear stability of the central schemes for long time integrations of unsteady flows such as in nonlinear aeroacoustics and turbulence dynamics. If numerical dissipation indeed is needed to stabilize the central scheme, can the splitting help minimize the numerical dissipation compared to its un-split cousin? An extensive numerical study on the vortex preservation capability of the splitting in conjunction with central schemes for long time integrations will be presented. 
The third is to study the effect of the non-conservative portion of the splitting in obtaining the correct shock location for high speed complex shock-turbulence interactions. The fourth is to determine if this method can be extended to other physical equations of state and other evolutionary equation sets. If numerical dissipation is needed, the Yee, Sandham, and Djomehri (1999) numerical dissipation is employed. The Yee et al. schemes fit in the Olsson and Oliger framework.
NASA Astrophysics Data System (ADS)
Balsara, Dinshaw S.; Taflove, Allen; Garain, Sudip; Montecinos, Gino
2017-11-01
While classic finite-difference time-domain (FDTD) solutions of Maxwell's equations have served the computational electrodynamics (CED) community very well, formulations based on Godunov methodology have begun to show advantages. We argue that the formulations presented so far are such that FDTD schemes and Godunov-based schemes each have their own unique advantages. However, there is currently not a single formulation that systematically integrates the strengths of both these major strains of development. While an early glimpse of such a formulation was offered in Balsara et al. [16], that paper focused on electrodynamics in plasma. Here, we present a synthesis that integrates the strengths of both FDTD and Godunov-based schemes into a robust single formulation for CED in material media. Three advances make this synthesis possible. First, from the FDTD method, we retain (but somewhat modify) a spatial staggering strategy for the primal variables. This provides a beneficial constraint preservation for the electric displacement and magnetic induction vector fields via reconstruction methods that were initially developed in some of the first author's papers for numerical magnetohydrodynamics (MHD). Second, from the Godunov method, we retain the idea of upwinding, except that this idea, too, has to be significantly modified to use the multi-dimensionally upwinded Riemann solvers developed by the first author. Third, we draw upon recent advances in arbitrary derivatives in space and time (ADER) time-stepping by the first author and his colleagues. We use the ADER predictor step to endow our method with sub-cell resolving capabilities so that the method can be stiffly stable and resolve significant sub-cell variation in the material properties within a zone. Overall, in this paper, we report a new scheme for numerically solving Maxwell's equations in material media, with special attention paid to a second-order-accurate formulation. 
Several numerical examples are presented to show that the proposed technique works. Because of its sub-cell resolving ability, the new method retains second-order accuracy even when material permeability and permittivity vary by an order-of-magnitude over just one or two zones. Furthermore, because the new method is also unconditionally stable in the presence of stiff source terms (i.e., in problems involving giant conductivity variations), it can handle several orders-of-magnitude variation in material conductivity over just one or two zones without any reduction of the time-step. Consequently, the CFL depends only on the propagation speed of light in the medium being studied.
Characteristic-based algorithms for flows in thermo-chemical nonequilibrium
NASA Technical Reports Server (NTRS)
Walters, Robert W.; Cinnella, Pasquale; Slack, David C.; Halt, David
1990-01-01
A generalized finite-rate chemistry algorithm with Steger-Warming, Van Leer, and Roe characteristic-based flux splittings is presented in three-dimensional generalized coordinates for the Navier-Stokes equations. Attention is placed on convergence to steady-state solutions with fully coupled chemistry. Time integration schemes including explicit m-stage Runge-Kutta, implicit approximate-factorization, relaxation and LU decomposition are investigated and compared in terms of residual reduction per unit of CPU time. Practical issues such as code vectorization and memory usage on modern supercomputers are discussed.
NASA Astrophysics Data System (ADS)
Choi, S.-J.; Giraldo, F. X.; Kim, J.; Shin, S.
2014-11-01
The non-hydrostatic (NH) compressible Euler equations for dry atmosphere were solved in a simplified two-dimensional (2-D) slice framework employing a spectral element method (SEM) for the horizontal discretization and a finite difference method (FDM) for the vertical discretization. By using horizontal SEM, which decomposes the physical domain into smaller pieces with a small communication stencil, a high level of scalability can be achieved. By using vertical FDM, an easy method for coupling the dynamics and existing physics packages can be provided. The SEM uses high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) quadrature points. The FDM employs a third-order upwind-biased scheme for the vertical flux terms and a centered finite difference scheme for the vertical derivative and integral terms. For temporal integration, a time-split, third-order Runge-Kutta (RK3) integration technique was applied. The Euler equations that were used here are in flux form based on the hydrostatic pressure vertical coordinate. The equations are the same as those used in the Weather Research and Forecasting (WRF) model, but a hybrid sigma-pressure vertical coordinate was implemented in this model. We validated the model by conducting the widely used standard tests: linear hydrostatic mountain wave, tracer advection, and gravity wave over the Schär-type mountain, as well as density current, inertia-gravity wave, and rising thermal bubble. The results from these tests demonstrated that the model using the horizontal SEM and the vertical FDM is accurate and robust provided sufficient diffusion is applied. The results with various horizontal resolutions also showed convergence of second-order accuracy due to the accuracy of the time integration scheme and that of the vertical direction, although high-order basis functions were used in the horizontal. 
By using the 2-D slice model, we effectively showed that the combined spatial discretization method of the spectral element and finite difference methods in the horizontal and vertical directions, respectively, offers a viable method for development of an NH dynamical core.
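The three-stage Runge-Kutta time integration mentioned above can be sketched in its Wicker-Skamarock form, which is the RK3 variant used by WRF-type dynamical cores; the linear test ODE is an illustrative assumption.

```python
import numpy as np

# Three-stage Runge-Kutta step in the Wicker-Skamarock form used by
# WRF-type dynamical cores, applied to the linear test ODE u' = -u.
def rk3_step(u, f, dt):
    u1 = u + dt / 3.0 * f(u)       # first stage: dt/3
    u2 = u + dt / 2.0 * f(u1)      # second stage: dt/2
    return u + dt * f(u2)          # final stage: full dt

f = lambda u: -u
u, dt = 1.0, 0.01
for _ in range(100):               # integrate to t = 1
    u = rk3_step(u, f, dt)
```

In the full model each stage would evaluate the flux-form right-hand side on the spectral-element/finite-difference grid, with the acoustic terms sub-stepped inside each RK3 stage (the "time-split" part).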
2014-01-01
Background Nigeria has included a regulated community-based health insurance (CBHI) model within its National Health Insurance Scheme (NHIS). Uptake to date has been disappointing, however. The aim of this study is to review the present status of CBHI in SSA in general to highlight the issues that affect its successful integration within the NHIS of Nigeria and more widely in developing countries. Methods A literature survey using PubMed and EconLit was carried out to identify and review studies that report factors affecting implementation of CBHI in SSA with a focus on Nigeria. Results CBHI schemes with a variety of designs have been introduced across SSA but with generally disappointing results so far. Two exceptions are Ghana and Rwanda, both of which have introduced schemes with effective government control and support coupled with intensive implementation programmes. Poor support for CBHI is repeatedly linked elsewhere with failure to engage and account for the ‘real world’ needs of beneficiaries, lack of clear legislative and regulatory frameworks, inadequate financial support, and unrealistic enrolment requirements. Nigeria’s CBHI-type schemes for the informal sectors of its NHIS have been set up under an appropriate legislative framework, but work is needed to eliminate regressive financing, to involve scheme members in the setting up and management of programmes, to inform and educate more effectively, to eliminate lack of confidence in the schemes, and to address inequity in provision. Targeted subsidies should also be considered. Conclusions Disappointing uptake of CBHI-type NHIS elements in Nigeria can be addressed through closer integration of informal and formal programmes under the NHIS umbrella, with increasing involvement of beneficiaries in scheme design and management, improved communication and education, and targeted financial assistance. PMID:24559409
Optimised effective potential for ground states, excited states, and time-dependent phenomena
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gross, E.K.U.
1996-12-31
(1) The optimized effective potential method is a variant of the traditional Kohn-Sham scheme. In this variant, the exchange-correlation energy E_xc is an explicit functional of single-particle orbitals. The exchange-correlation potential, given as usual by the functional derivative v_xc = δE_xc/δρ, then satisfies an integral equation involving the single-particle orbitals. This integral equation is solved semi-analytically using a scheme recently proposed by Krieger, Li and Iafrate. If the exact (Fock) exchange-energy functional is employed together with the Colle-Salvetti orbital functional for the correlation energy, the mean absolute deviation of the resulting ground-state energies from the exact nonrelativistic values is CT mH for the first-row atoms, as compared to 4.5 mH in a state-of-the-art CI calculation. The proposed scheme is thus significantly more accurate than the conventional Kohn-Sham method, while the numerical effort involved is about the same as for an ordinary Hartree-Fock calculation. (2) A time-dependent generalization of the optimized-potential method is presented and applied to the linear-response regime. Since time-dependent density functional theory leads to a formally exact representation of the frequency-dependent linear density response, and since the latter, as a function of frequency, has poles at the excitation energies of the fully interacting system, the formalism is suitable for the calculation of excitation energies. A simple additive correction to the Kohn-Sham single-particle excitation energies will be deduced, and first results for atomic and molecular singlet and triplet excitation energies will be presented. (3) Beyond the regime of linear response, the time-dependent optimized-potential method is employed to describe atoms in strong femtosecond laser pulses. Ionization yields and harmonic spectra will be presented and compared with experimental data.
An authentication scheme to healthcare security under wireless sensor networks.
Hsiao, Tsung-Chih; Liao, Yu-Ting; Huang, Jen-Yan; Chen, Tzer-Shyong; Horng, Gwo-Boa
2012-12-01
In recent years, Taiwan has seen an extension of the average life expectancy and a drop in the overall fertility rate, turning it into an aged society. Due to this phenomenon, how to provide the elderly and patients with chronic diseases a suitable healthcare environment has become a critical issue. Therefore, we propose a new scheme that integrates healthcare services with wireless sensor technology, in which sensor nodes are employed to measure patients' vital signs. Data collected from these sensor nodes are then transmitted to the mobile devices of the medical staff and system administrator, promptly enabling them to understand the patients' condition in real time, which will significantly improve patients' healthcare quality. As per the personal data protection act, patients' vital signs can only be accessed by authorized medical staff. In order to protect patients' data, the system administrator will verify the medical staff's identity through the mobile device using a smart card and password mechanism. Accordingly, only verified medical staff can obtain patients' vital signs data such as their blood pressure, pulsation, and body temperature, etc. Besides, the scheme includes a time-bounded characteristic that allows the verified staff access to data without having to re-authenticate and re-login to the system within a set period of time. Consequently, the time-bounded property also increases the work efficiency of the system administrator and user.
Hu, Yang; Li, Decai; Shu, Shi; Niu, Xiaodong
2016-02-01
Based on the Darcy-Brinkman-Forchheimer equation, a finite-volume computational model with a lattice Boltzmann flux scheme is proposed for incompressible porous media flow in this paper. The fluxes across the cell interface are calculated by reconstructing the local solution of the generalized lattice Boltzmann equation for porous media flow. The time-scaled midpoint integration rule is adopted to discretize the governing equation, which causes the time step to be limited by the Courant-Friedrichs-Lewy condition. The force term which evaluates the effect of the porous medium is added to the discretized governing equation directly. Numerical simulations of the steady Poiseuille flow, the unsteady Womersley flow, the circular Couette flow, and the lid-driven flow are carried out to verify the present computational model. The obtained results show good agreement with the analytical, finite-difference, and/or previously published solutions.
NASA Astrophysics Data System (ADS)
Chen, Guangye; Chacón, Luis; CoCoMans Team
2014-10-01
For decades, the Vlasov-Darwin model has been recognized to be attractive for PIC simulations (to avoid radiative noise issues) in non-radiative electromagnetic regimes. However, the Darwin model results in elliptic field equations that render explicit time integration unconditionally unstable. Improving on linearly implicit schemes, fully implicit PIC algorithms for both electrostatic and electromagnetic regimes, with exact discrete energy and charge conservation properties, have recently been developed in 1D. This study builds on these recent algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the particle-field equations in multiple dimensions. The algorithm conserves energy, charge, and canonical momentum exactly, even with grid packing. A simple fluid preconditioner allows efficient use of large timesteps, O(√(m_i/m_e) c/v_Te) larger than the explicit CFL. We demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 2D3V.
77 FR 27832 - Shipping Coordinating Committee; Notice of Committee Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-11
... Scheme --Integration of women in the maritime sector --Global maritime training institutions --Impact... financial sustainability of the Organization --Voluntary IMO Member State Audit Scheme --Consideration of...
The use of staggered scheme and an absorbing buffer zone for computational aeroacoustics
NASA Technical Reports Server (NTRS)
Nark, Douglas M.
1995-01-01
Various problems from those proposed for the Computational Aeroacoustics (CAA) workshop were studied using second and fourth order staggered spatial discretizations in conjunction with fourth order Runge-Kutta time integration. In addition, an absorbing buffer zone was used at the outflow boundaries. Promising results were obtained and provide a basis for application of these techniques to a wider variety of problems.
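The combination of Runge-Kutta time integration with an absorbing buffer zone can be sketched on a 1-D advection problem; the grid, sponge profile, and pulse below are illustrative choices, not the workshop problems.

```python
import numpy as np

# 1-D advection u_t + c*u_x = 0 with a central difference in space,
# classical RK4 in time, and an absorbing buffer ("sponge") zone near the
# outflow boundary; grid, sponge strength, and pulse are illustrative.
N, c = 200, 1.0
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
dt = 0.4 * dx / c
sigma = np.where(x > 0.8, 500.0 * ((x - 0.8) / 0.2) ** 2, 0.0)

def rhs(u):
    du = np.zeros_like(u)
    du[1:-1] = -c * (u[2:] - u[:-2]) / (2.0 * dx)   # central difference
    return du - sigma * u                            # sponge damping

u = np.exp(-((x - 0.3) / 0.05) ** 2)                 # Gaussian pulse
for _ in range(int(1.0 / dt)):                       # advect one domain length
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

The quadratic ramp-up of the damping coefficient keeps the sponge itself from reflecting waves; after the pulse has entered the buffer, almost nothing returns into the interior.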
Kinetic energy definition in velocity Verlet integration for accurate pressure evaluation
NASA Astrophysics Data System (ADS)
Jung, Jaewoon; Kobayashi, Chigusa; Sugita, Yuji
2018-04-01
In molecular dynamics (MD) simulations, a proper definition of kinetic energy is essential for controlling pressure as well as temperature in the isothermal-isobaric condition. The virial theorem provides an equation that connects the average kinetic energy with the product of particle coordinate and force. In this paper, we show that the theorem is satisfied in MD simulations with a larger time step and holonomic constraints of bonds, only when a proper definition of kinetic energy is used. We provide a novel definition of kinetic energy, which is calculated from velocities at the half-time steps (t - Δt/2 and t + Δt/2) in the velocity Verlet integration method. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) lipid bilayer and a water box using the kinetic energy definition could reproduce the physical properties in the isothermal-isobaric condition properly. We also develop a multiple time step (MTS) integration scheme with the kinetic energy definition. MD simulations with the MTS integration for the DPPC and water box systems provided the same quantities as the velocity Verlet integration method, even when the thermostat and barostat are updated less frequently.
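The half-step kinetic-energy idea can be illustrated on a 1-D harmonic oscillator, where the virial identity <K> = 0.5*k*<x^2> holds essentially exactly when kinetic energy is evaluated from the half-step velocities; the toy system is a stand-in for the MD simulations in the abstract.

```python
# Velocity Verlet for a 1-D harmonic oscillator (V = k*x**2/2), logging
# kinetic energy from the half-step velocity v(t + dt/2); a toy stand-in
# for the MD systems discussed above.
m, k, dt = 1.0, 1.0, 0.1
x, v = 1.0, 0.0
force = lambda x: -k * x
ke_half, x_sq = [], []
for _ in range(10000):
    v_half = v + 0.5 * dt * force(x) / m    # kick to v(t + dt/2)
    x += dt * v_half                        # drift
    v = v_half + 0.5 * dt * force(x) / m    # kick to on-step v(t + dt)
    ke_half.append(0.5 * m * v_half ** 2)   # half-step kinetic energy
    x_sq.append(x * x)
avg_ke = sum(ke_half) / len(ke_half)
virial = 0.5 * k * sum(x_sq) / len(x_sq)    # 0.5*<x * dV/dx>
```

For the leapfrog-integrated oscillator one can show analytically that the time average of the half-step kinetic energy equals the virial term at any step size, whereas the on-step velocities satisfy it only as dt goes to zero; this is the essence of the paper's argument.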
Numerical simulation of the fluid-structure interaction between air blast waves and soil structure
NASA Astrophysics Data System (ADS)
Umar, S.; Risby, M. S.; Albert, A. Luthfi; Norazman, M.; Ariffin, I.; Alias, Y. Muhamad
2014-03-01
Normally, an explosion threat in the free field, especially from high explosives, is very dangerous due to the generated ground shocks, which carry a highly impulsive load. Nowadays, explosion threats occur not only on the battlefield, but also in industries and urban areas. In industries such as oil and gas, explosion threats may affect logistics, maintenance, production, and the distribution pipelines located underground to supply crude oil. Therefore, appropriate blast resistance is a priority requirement that can be obtained through an assessment of the structural response, material strength and impact pattern of the material due to ground shock. A highly impulsive load from ground shock is a dynamic load because its loading time is shorter than the ground response time. Of late, almost all blast studies consider and analyze ground shock within the fluid-structure interaction (FSI) framework because of its influence on the propagation and interaction of the shock. Furthermore, analysis in the FSI framework integrates the action of the ground shock and the reaction of the ground in calculations of velocity, pressure and force. Therefore, this FSI integration can bring simulated ground shock analysis closer to experimental investigation results. In this study, the FSI was implemented in the AUTODYN computer code using the Euler-Godunov scheme and the arbitrary Lagrangian-Eulerian (ALE) formulation. Euler-Godunov is capable of delivering structural computation in a 3D analysis, while ALE delivers an arbitrary calculation that is appropriate for an FSI analysis. In addition, the ALE scheme offers a fine approach for small-deformation analysis with arbitrary motion, while the Euler-Godunov scheme offers a fine approach for large-deformation analysis. An integrated scheme based on Euler-Godunov and the arbitrary Lagrangian-Eulerian allows us to analyze the blast propagation waves and structural interaction simultaneously.
Variationally consistent discretization schemes and numerical algorithms for contact problems
NASA Astrophysics Data System (ADS)
Wohlmuth, Barbara
We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently, by means of the non-linear complementarity function, as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as a Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass.
Numerical results in two and three dimensions illustrate the wide range of possible applications and show the performance of the space discretization scheme, non-linear solver, adaptive refinement process and time integration.
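The primal-dual active set strategy described above can be sketched on the simplest instance, a discrete obstacle problem A u = f + lam, u >= psi, lam >= 0, lam_i (u_i - psi_i) = 0. This is a minimal Python/NumPy illustration of the idea, not the authors' mortar-based contact implementation; the constant c and the dense solves are illustrative choices.

```python
import numpy as np

def primal_dual_active_set(A, f, psi, c=1.0, max_iter=50):
    """Semi-smooth Newton / primal-dual active set iteration for the
    complementarity system  A u = f + lam,  u >= psi,  lam >= 0,
    lam_i (u_i - psi_i) = 0.  The non-smooth complementarity function
    C(u, lam) = lam - max(0, lam + c (psi - u)) drives the set update."""
    n = len(f)
    u = np.maximum(np.linalg.solve(A, f), psi)
    lam = np.zeros(n)
    for _ in range(max_iter):
        active = lam + c * (psi - u) > 0
        inactive = ~active
        u_new = np.empty(n)
        lam_new = np.zeros(n)
        u_new[active] = psi[active]          # contact nodes: displacement fixed
        if inactive.any():                   # free nodes: solve reduced system
            rhs = f[inactive] - A[np.ix_(inactive, active)] @ psi[active]
            u_new[inactive] = np.linalg.solve(A[np.ix_(inactive, inactive)], rhs)
        lam_new[active] = (A @ u_new - f)[active]
        if np.array_equal(active, lam_new + c * (psi - u_new) > 0):
            return u_new, lam_new            # active set unchanged: converged
        u, lam = u_new, lam_new
    return u, lam
```

As the abstract notes, each iteration is just a linear solve on the current inactive set, and convergence is detected when the active set stops changing.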
Velocity and stress autocorrelation decay in isothermal dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Chaudhri, Anuj; Lukes, Jennifer R.
2010-02-01
The velocity and stress autocorrelation decay in a dissipative particle dynamics ideal fluid model is analyzed in this paper. The autocorrelation functions are calculated at three different friction parameters and three different time steps using the well-known Groot/Warren algorithm and newer algorithms, including self-consistent leap-frog, self-consistent velocity Verlet, and Shardlow first- and second-order integrators. At low friction values, the velocity autocorrelation function decays exponentially at short times, shows slower-than-exponential decay at intermediate times, and approaches zero at long times for all five integrators. As the friction value increases, the deviation from exponential behavior occurs earlier and is more pronounced. At small time steps, all the integrators give identical decay profiles. As the time step increases, there are qualitative and quantitative differences between the integrators. The stress correlation behavior is markedly different for the algorithms. The self-consistent velocity Verlet and the Shardlow algorithms show very similar stress autocorrelation decay as the friction parameter changes, whereas the Groot/Warren and leap-frog schemes show variations at higher friction factors. Diffusion coefficients and shear viscosities are calculated using Green-Kubo integration of the velocity and stress autocorrelation functions. The diffusion coefficients match well-known theoretical results in the low-friction limit. Although the stress autocorrelation function is different for each integrator, fluctuates rapidly, and gives poor statistics in most cases, the calculated shear viscosities still fall within the range of theoretical predictions and nonequilibrium studies.
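The Green-Kubo step mentioned above reduces, for the diffusion coefficient, to a time integral of the velocity autocorrelation function. A minimal sketch using a synthetic exponential VACF (the form expected for an overdamped Langevin particle; the parameter values here are illustrative, not taken from the paper):

```python
import numpy as np

def green_kubo_diffusion(vacf, dt):
    """Green-Kubo diffusion coefficient from a single Cartesian component:
    D = integral_0^inf <v(0) v(t)> dt, evaluated by the trapezoidal rule."""
    return dt * (0.5 * vacf[0] + vacf[1:-1].sum() + 0.5 * vacf[-1])

# Synthetic one-component VACF of an overdamped Langevin particle:
# C(t) = (kT/m) exp(-gamma t), for which D = kT / (m gamma) exactly.
kT_over_m, gamma, dt = 1.0, 2.0, 1e-3
t = np.arange(0.0, 20.0, dt)
vacf = kT_over_m * np.exp(-gamma * t)
D = green_kubo_diffusion(vacf, dt)
```

The shear viscosity is obtained analogously from the stress autocorrelation, eta = (V / kT) * integral of <sigma_xy(0) sigma_xy(t)> dt, which is why its poor statistics propagate directly into the viscosity estimate.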
NASA Astrophysics Data System (ADS)
Kadhem, Hasan; Amagasa, Toshiyuki; Kitagawa, Hiroyuki
Encryption can provide strong security for sensitive data against inside and outside attacks. This is especially true in the “Database as Service” model, where confidentiality and privacy are important issues for the client. However, existing encryption approaches are vulnerable to statistical attacks because each plaintext value is always encrypted to the same fixed value. This paper presents a novel database encryption scheme called MV-OPES (Multivalued — Order Preserving Encryption Scheme), which allows privacy-preserving queries over encrypted databases with an improved security level. Our idea is to encrypt a value to multiple different values to prevent statistical attacks. At the same time, MV-OPES preserves the order of the integer values so that comparison operations can be applied directly on encrypted data. Using a calculated distance (range), we propose a novel method that allows a join query between relations based on inequality over encrypted values. We also present techniques to offload query execution load to the database server as much as possible, thereby making better use of server resources in a database outsourcing environment. Our scheme can easily be integrated with current database systems, as it is designed to work with existing indexing structures. It is robust against statistical attacks and the estimation of true values. MV-OPES experiments show that security for sensitive data can be achieved with reasonable overhead, establishing the practicability of the scheme.
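The one-to-many, order-preserving idea can be illustrated with a deliberately simplified toy: give each plaintext integer its own disjoint interval of ciphertexts and draw a fresh random member per encryption. This is illustrative only — the actual MV-OPES uses secret, non-uniform interval boundaries rather than the fixed multiplier assumed here.

```python
import secrets

class ToyMVOPES:
    """Toy one-to-many order-preserving encryption (NOT the real MV-OPES):
    plaintext v owns the ciphertext interval [v*spread, v*spread + spread - 1].
    Equal plaintexts encrypt to different ciphertexts, defeating the
    frequency analysis described above, while order is fully preserved."""
    def __init__(self, spread=1000):
        self.spread = spread
    def encrypt(self, v):
        return v * self.spread + secrets.randbelow(self.spread)
    def decrypt(self, c):
        return c // self.spread
```

Because the intervals are disjoint, any ciphertext of a smaller plaintext is strictly below any ciphertext of a larger one, so range predicates and inequality joins can run directly on ciphertexts.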
Zhao, Zhenguo; Shi, Wenbo
2014-01-01
Probabilistic signature schemes have been widely used in modern electronic commerce since they provide integrity, authenticity, and nonrepudiation. Recently, Wu and Lin proposed a novel probabilistic signature (PS) scheme using the bilinear square Diffie-Hellman (BSDH) problem. They also extended it to a universal designated verifier signature (UDVS) scheme. In this paper, we analyze the security of Wu et al.'s PS scheme and UDVS scheme. Through concrete attacks, we demonstrate that neither scheme is unforgeable. The security analysis shows that their schemes are not suitable for practical applications. PMID:25025083
Yang, L M; Shu, C; Wang, Y
2016-03-01
In this work, a discrete gas-kinetic scheme (DGKS) is presented for the simulation of two-dimensional viscous incompressible and compressible flows. This scheme is developed from the circular function-based GKS, which was recently proposed by Shu and his co-workers [L. M. Yang, C. Shu, and J. Wu, J. Comput. Phys. 274, 611 (2014)]. In the circular function-based GKS, the integrals for the conservation forms of moments, taken over the infinite domain in the Maxwellian function-based GKS, are reduced to integrals along a circle. As a result, explicit formulations of the conservative variables and fluxes can be derived. However, these explicit formulations for viscous flows are still complicated and may not be easy for new users to apply. By using certain discrete points to represent the circle in the phase velocity space, the complicated formulations can be replaced by a simple solution process. The basic requirement is that the conservation forms of moments for the circular function-based GKS be satisfied exactly by a weighted summation of distribution functions at the discrete points. In this work, it is shown that integral quadrature by four discrete points on the circle, which forms the D2Q4 discrete velocity model, exactly matches the integrals. Numerical results show that the present scheme provides accurate results for incompressible and compressible viscous flows at roughly the same computational cost as the Roe scheme.
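The moment-matching claim can be checked numerically: four equally weighted points on the circle reproduce every circle-averaged moment of the phase velocity up to third order. A small sketch (the radius value is arbitrary here; in the GKS it is fixed by the temperature):

```python
import numpy as np

# D2Q4 discrete velocity model: four points (+-r, 0), (0, +-r) on the
# circle of radius r in phase-velocity space, with equal weights 1/4.
r = 1.3
xi = r * np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
w = np.full(4, 0.25)

# Uniform periodic quadrature over the circle (exact for these
# trigonometric moments at this resolution).
theta = np.linspace(0.0, 2.0 * np.pi, 200000, endpoint=False)
cx, cy = r * np.cos(theta), r * np.sin(theta)

# Compare all moments <xi_x^p xi_y^q> with p + q <= 3.
max_err = 0.0
for p in range(4):
    for q in range(4 - p):
        circle = (cx**p * cy**q).mean()
        d2q4 = np.sum(w * xi[:, 0]**p * xi[:, 1]**q)
        max_err = max(max_err, abs(circle - d2q4))
```

Fourth-order moments such as xi_x^4 are not matched by D2Q4 (circle average 3r^4/8 versus discrete sum r^4/2), which is consistent with the model only needing to reproduce the conservation-form moments.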
Computing thermal Wigner densities with the phase integration method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beutier, J.; Borgis, D.; Vuilleumier, R.
2014-08-28
We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.
NASA Astrophysics Data System (ADS)
Choudhury, Devanil; Das, Someshwar
2017-06-01
The Advanced Research WRF (ARW) model is used to simulate the Very Severe Cyclonic Storms (VSCS) Hudhud (7-13 October 2014), Phailin (8-14 October 2013) and Lehar (24-29 November 2013), in order to investigate how sensitive the forecast skill for tropical cyclone track and intensity is to the choice of microphysics scheme, using high-resolution (9 and 3 km) 120-hr model integrations. At cloud-resolving grid scales (<5 km), cloud microphysics plays an important role. The performance of the Goddard, Thompson, LIN and NSSL schemes is evaluated and compared with observations and a CONTROL forecast. This study investigates the sensitivity to microphysics with explicitly resolved convection. It shows that the Goddard one-moment bulk liquid-ice microphysical scheme provides the highest skill for track, whereas for intensity both the Thompson and Goddard schemes perform better. The Thompson scheme shows the highest intensity skill at 48, 96 and 120 hr, whereas at 24 and 72 hr the Goddard scheme does. Since higher-resolution domains are known to produce better cyclone intensity and structure, it is desirable to resolve the convection with sufficiently high resolution and explicit cloud physics. This study suggests that the Goddard cumulus ensemble microphysical scheme is suitable for high-resolution ARW simulation of tropical cyclone track and intensity over the Bay of Bengal. Although the present study is based on only three cyclones, it could be useful for planning real-time predictions using the ARW modelling system.
Deng, Yong-Yuan; Chen, Chin-Ling; Tsaur, Woei-Jiunn; Tang, Yung-Wen; Chen, Jung-Hsuan
2017-01-01
As sensor networks and cloud computation technologies have rapidly developed over recent years, many services and applications integrating these technologies into daily life have come together as an Internet of Things (IoT). At the same time, aging populations have increased the need for expanded and more efficient elderly care services. Fortunately, elderly people can now wear sensing devices which relay data to a personal wireless device, forming a body area network (BAN). These personal wireless devices collect and integrate patients’ personal physiological data, and then transmit the data to the backend of the network for related diagnostics. However, a great deal of the information transmitted by such systems is sensitive data, and must therefore be subject to stringent security protocols. Protecting this data from unauthorized access is thus an important issue in IoT-related research. For cloud healthcare environments, scholars have proposed secure mechanisms to protect sensitive patient information. Their schemes provide a general architecture; however, these previous schemes still have some vulnerabilities, and thus cannot guarantee complete security. This paper proposes a secure and lightweight body-sensor network based on the Internet of Things for cloud healthcare environments, in order to address the vulnerabilities discovered in previous schemes. The proposed authentication mechanism is applied to a medical reader to provide a more comprehensive architecture while also providing mutual authentication, and guaranteeing data integrity, user untraceability, and forward and backward secrecy, in addition to being resistant to replay attacks. PMID:29244776
Optimization of Time-Dependent Particle Tracing Using Tetrahedral Decomposition
NASA Technical Reports Server (NTRS)
Kenwright, David; Lane, David
1995-01-01
An efficient algorithm is presented for computing particle paths, streak lines and time lines in time-dependent flows with moving curvilinear grids. The integration, velocity interpolation and step-size control are all performed in physical space which avoids the need to transform the velocity field into computational space. This leads to higher accuracy because there are no Jacobian matrix approximations or expensive matrix inversions. Integration accuracy is maintained using an adaptive step-size control scheme which is regulated by the path line curvature. The problem of cell-searching, point location and interpolation in physical space is simplified by decomposing hexahedral cells into tetrahedral cells. This enables the point location to be done analytically and substantially faster than with a Newton-Raphson iterative method. Results presented show this algorithm is up to six times faster than particle tracers which operate on hexahedral cells yet produces almost identical particle trajectories.
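The analytic point location enabled by tetrahedral decomposition amounts to solving one 3x3 linear system for barycentric coordinates: the point lies inside the tetrahedron iff all four coordinates are in [0, 1], and the same coordinates give the linear velocity interpolation for free. A minimal sketch of that idea (not the paper's production tracer):

```python
import numpy as np

def barycentric(p, tet):
    """Barycentric coordinates of point p in a tetrahedron given as a
    4x3 array of vertices.  p is inside (or on) the tetrahedron iff all
    four coordinates lie in [0, 1] -- no Newton-Raphson iteration needed."""
    T = (tet[1:] - tet[0]).T               # 3x3 edge matrix
    l123 = np.linalg.solve(T, p - tet[0])  # analytic point location
    return np.concatenate(([1.0 - l123.sum()], l123))

def interpolate(p, tet, vertex_values):
    """Linear interpolation of vertex data (e.g. velocity components) at p,
    reusing the barycentric coordinates from the containment test."""
    return barycentric(p, tet) @ vertex_values
```

This is the analytic counterpart of the iterative search required on hexahedral cells, which is why the decomposition pays off in speed.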
NASA Technical Reports Server (NTRS)
Cooke, C. H.; Blanchard, D. K.
1975-01-01
A finite element algorithm for solution of fluid flow problems characterized by the two-dimensional compressible Navier-Stokes equations was developed. The program is intended for viscous compressible high speed flow; hence, primitive variables are utilized. The physical solution was approximated by trial functions which at a fixed time are piecewise cubic on triangular elements. The Galerkin technique was employed to determine the finite-element model equations. A leapfrog time integration is used for marching asymptotically from initial to steady state, with iterated integrals evaluated by numerical quadratures. The nonsymmetric linear systems of equations governing time transition from step-to-step are solved using a rather economical block iterative triangular decomposition scheme. The concept was applied to the numerical computation of a free shear flow. Numerical results of the finite-element method are in excellent agreement with those obtained from a finite difference solution of the same problem.
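The leapfrog marching used above is the three-level scheme z^{n+1} = z^{n-1} + 2*dt*f(z^n). A minimal sketch on a model ODE, with the first step bootstrapped by forward Euler (an illustration of the time integrator only, not of the paper's Navier-Stokes system):

```python
import numpy as np

def leapfrog(f, z0, dt, nsteps):
    """Three-level leapfrog: z[n+1] = z[n-1] + 2*dt*f(z[n]).
    The scheme needs two starting levels, so the first step is taken
    with forward Euler (a common bootstrap choice)."""
    z_prev = np.asarray(z0, dtype=float)
    z = z_prev + dt * f(z_prev)
    for _ in range(nsteps - 1):
        z_prev, z = z, z_prev + 2 * dt * f(z)
    return z

# Harmonic oscillator y'' = -y written as a first-order system (y, y')
f = lambda z: np.array([z[1], -z[0]])
z = leapfrog(f, [1.0, 0.0], 1e-3, 6283)   # integrate to t ~= 2*pi, one period
```

For this neutrally stable problem leapfrog has no amplitude error, only an O(dt^2) phase error, which is why it suits asymptotic marching to steady state.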
Multiple time step integrators in ab initio molecular dynamics.
Luehr, Nathan; Markland, Thomas E; Martínez, Todd J
2014-02-28
Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
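The multiple-time-step idea can be sketched with an r-RESPA-style integrator: the slow force is applied as an impulse at the outer step, while the fast force is integrated with velocity Verlet at a finer inner step. This is a generic sketch of the technique; the fast/slow split here is an assumed given, whereas the paper obtains it via fragment decomposition or range separation of the Coulomb operator.

```python
def respa_step(x, v, m, f_fast, f_slow, dt, n_inner):
    """One r-RESPA-style multiple-time-step step.  The slow force kicks the
    velocity at the outer step dt; the fast force is integrated with
    velocity Verlet at dt / n_inner in between."""
    v = v + 0.5 * dt * f_slow(x) / m       # outer half-kick (slow force)
    h = dt / n_inner
    for _ in range(n_inner):               # inner velocity-Verlet loop
        v = v + 0.5 * h * f_fast(x) / m
        x = x + h * v
        v = v + 0.5 * h * f_fast(x) / m
    v = v + 0.5 * dt * f_slow(x) / m       # outer half-kick (slow force)
    return x, v
```

Because the composition is symplectic, energy is conserved over long runs as long as the outer step stays away from resonances with the fast motion — the practical limit on the speedup the abstract quantifies.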
Lu, Hai-Han; Li, Chung-Yi; Chen, Hwan-Wei; Ho, Chun-Ming; Cheng, Ming-Te; Huang, Sheng-Jhe; Yang, Zih-Yi; Lin, Xin-Yao
2016-07-25
A bidirectional fiber-wireless and fiber-invisible laser light communication (IVLLC) integrated system that employs a polarization-orthogonal modulation scheme for hybrid cable television (CATV)/microwave (MW)/millimeter-wave (MMW)/baseband (BB) signal transmission is proposed and demonstrated. To our knowledge, this is the first bidirectional fiber-wireless and fiber-IVLLC integrated system to adopt a polarization-orthogonal modulation scheme for hybrid CATV/MW/MMW/BB signals. For downlink transmission, the carrier-to-noise ratio (CNR), composite second-order (CSO), composite triple-beat (CTB), and bit error rate (BER) perform well over 40-km single-mode fiber (SMF) and 10-m RF/50-m optical wireless transport scenarios. For uplink transmission, good BER performance is obtained over a 40-km SMF and 50-m optical wireless transport scenario. Such a bidirectional fiber-wireless and fiber-IVLLC integrated system for hybrid CATV/MW/MMW/BB signal transmission will be an attractive alternative for providing broadband integrated services, including CATV, Internet, and telecommunication services, and represents a significant step toward the convergence of the fiber backbone and RF/optical wireless feeder networks.
NASA Astrophysics Data System (ADS)
Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.; Tang, Qi
2017-08-01
A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added-mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this first part of a two-part series, the properties of the AMP scheme are motivated and evaluated through the development and analysis of some model problems. The analysis shows when and why the traditional partitioned scheme becomes unstable due to either added-mass or added-damping effects. The analysis also identifies the proper form of the added-damping which depends on the discrete time-step and the grid-spacing normal to the rigid body. The results of the analysis are confirmed with numerical simulations that also demonstrate a second-order accurate implementation of the AMP scheme.
NASA Astrophysics Data System (ADS)
Wang, Tianyi; Gong, Feng; Lu, Anjiang; Zhang, Damin; Zhang, Zhengping
2017-12-01
In this paper, we propose a scheme that integrates quantum key distribution and private classical communication via continuous variables. The integrated scheme employs both quadratures of a weak coherent state, with encrypted bits encoded on the signs and Gaussian random numbers encoded on the values of the quadratures. The integration enables quantum and classical data to share the same physical and logical channel. Simulation results based on practical system parameters demonstrate that both classical communication and quantum communication can be implemented over distances of tens of kilometers, thus providing a potential solution for simultaneous transmission of quantum communication and classical communication.
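The sign/value split of each quadrature can be mimicked classically: the encrypted bit fixes the sign, and a Gaussian random number fixes the magnitude. The sketch below is a toy model of the encoding logic only — it ignores channel noise, modulation-variance constraints, and the actual continuous-variable QKD post-processing.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(bits, sigma=3.0):
    """Each quadrature sample carries an encrypted bit on its sign and a
    Gaussian random value on its magnitude (toy model of the scheme)."""
    g = np.abs(rng.normal(0.0, sigma, size=len(bits)))
    return np.where(bits == 1, g, -g)

def decode(q):
    """Recover the classical bit from the sign; the magnitude remains
    available as the Gaussian modulation used by the quantum protocol."""
    return (q > 0).astype(int), np.abs(q)

bits = rng.integers(0, 2, 16)
q = encode(bits)
rec_bits, gauss_values = decode(q)
```

The point of the construction is visible even in the toy: the two data streams occupy the same samples, so quantum and classical traffic share one channel.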
Sagiyama, Koki; Rudraraju, Shiva; Garikipati, Krishna
2016-09-13
Here, we consider solid state phase transformations that are caused by free energy densities with domains of non-convexity in strain-composition space; we refer to the non-convex domains as mechano-chemical spinodals. The non-convexity with respect to composition and strain causes segregation into phases with different crystal structures. We work on an existing model that couples the classical Cahn-Hilliard model with Toupin’s theory of gradient elasticity at finite strains. Both systems are represented by fourth-order, nonlinear, partial differential equations. The goal of this work is to develop unconditionally stable, second-order accurate time-integration schemes, motivated by the need to carry out large scale computations of dynamically evolving microstructures in three dimensions. We also introduce reduced formulations naturally derived from these proposed schemes for faster computations that are still second-order accurate. Although our method is developed and analyzed here for a specific class of mechano-chemical problems, one can readily apply the same method to develop unconditionally stable, second-order accurate schemes for any problems for which free energy density functions are multivariate polynomials of solution components and component gradients. Apart from an analysis and construction of methods, we present a suite of numerical results that demonstrate the schemes in action.
An Elliptic Curve Based Schnorr Cloud Security Model in Distributed Environment
Muthurajan, Vinothkumar; Narayanasamy, Balaji
2016-01-01
Cloud computing requires upgraded security in its data transmission approaches. In general, key-based encryption/decryption (symmetric and asymmetric) mechanisms ensure secure data transfer between devices. Symmetric key mechanisms (pseudorandom functions) provide a lower protection level than asymmetric key (RSA, AES, and ECC) schemes. Expired content and irrelevant resources can also lead to unauthorized data access. This paper investigates how integrity and secure data transfer can be improved using an Elliptic Curve based Schnorr scheme. It proposes a virtual machine based cloud model with a Hybrid Cloud Security Algorithm (HCSA) to remove expired content. HCSA-based auditing improves the prediction of malicious activity during data transfer. Duplication in the cloud server degrades the performance of EC-Schnorr based encryption schemes, so this paper utilizes a Bloom filter to avoid cloud server duplication. The combination of EC-Schnorr and the Bloom filter efficiently improves the security performance. A comparative analysis between the proposed HCSA and the existing Distributed Hash Table (DHT), regarding execution time, computational overhead, and auditing time under varying auditing requests and servers, confirms the effectiveness of HCSA for building the cloud security model. PMID:26981584
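The duplicate-detection component rests on a Bloom filter (the abstract's "blooming filter"): a compact bit array that answers "definitely not seen" or "probably seen". A minimal sketch — the sizes, hash construction, and item naming here are illustrative assumptions, not the paper's parameters:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per item in an m-bit array.
    Membership answers are 'definitely absent' or 'probably present'
    (false positives possible, false negatives impossible), which is
    enough to skip re-auditing already-seen content on the server."""
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, 0
    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos
    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

The false-positive rate is tuned by m and k relative to the number of stored items; a duplicate hit can then trigger a cheap exact check instead of a full re-encryption or audit.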
NASA Astrophysics Data System (ADS)
Jridi, Maher; Alfalou, Ayman
2018-03-01
In this paper, an enhancement of an existing optical simultaneous fusion, compression and encryption (SFCE) scheme in terms of real-time requirements, bandwidth occupation and encryption robustness is proposed. We use an approximate form of the DCT to decrease the computational resources. Then, a novel chaos-based encryption algorithm is introduced in order to achieve the confusion and diffusion effects. In the confusion phase, the Henon map is used for row and column permutations, with the initial condition derived from the original image. Furthermore, the Skew Tent map is employed to generate another random matrix to carry out pixel scrambling. Finally, an adaptation of a classical diffusion process scheme is employed to strengthen the security of the cryptosystem against statistical, differential, and chosen-plaintext attacks. Analyses of the key space, histogram, adjacent-pixel correlation, sensitivity, and encryption speed of the scheme are provided, and compare favorably with those of the existing crypto-compression system. The proposed method is friendly to both digital and optical implementation, which facilitates the integration of the crypto-compression system in a very broad range of scenarios.
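The confusion phase can be sketched by iterating the Henon map and using the argsort of the resulting chaotic sequence as a row permutation. This illustrates only the permutation idea; the actual scheme derives the initial condition from the image itself, also permutes columns, and adds Skew Tent map pixel scrambling plus a diffusion stage. The key values and burn-in length below are illustrative assumptions.

```python
import numpy as np

def henon_sequence(n, x0=0.1, y0=0.1, a=1.4, b=0.3, burn=100):
    """Iterate the Henon map x' = 1 - a*x^2 + y, y' = b*x and return n
    x-values after a burn-in; their argsort yields a key-dependent,
    chaotic permutation."""
    x, y = x0, y0
    out = []
    for i in range(burn + n):
        x, y = 1.0 - a * x * x + y, b * x
        if i >= burn:
            out.append(x)
    return np.array(out)

def permute_rows(img, key=(0.1, 0.1)):
    """Confusion step: shuffle image rows with the Henon-derived permutation."""
    perm = np.argsort(henon_sequence(img.shape[0], *key))
    return img[perm], perm

def unpermute_rows(img, perm):
    """Inverse permutation for decryption (argsort of perm inverts it)."""
    return img[np.argsort(perm)]
```

Sensitivity to the key comes from the chaotic dynamics: a tiny change in (x0, y0) produces an unrelated permutation after the burn-in.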
Stabilized linear semi-implicit schemes for the nonlocal Cahn-Hilliard equation
NASA Astrophysics Data System (ADS)
Du, Qiang; Ju, Lili; Li, Xiao; Qiao, Zhonghua
2018-06-01
Compared with the well-known classic Cahn-Hilliard equation, the nonlocal Cahn-Hilliard equation is equipped with a nonlocal diffusion operator and can describe more practical phenomena for modeling phase transitions of microstructures in materials. On the other hand, it evidently brings more computational costs in numerical simulations, thus efficient and accurate time integration schemes are highly desired. In this paper, we propose two energy-stable linear semi-implicit methods with first and second order temporal accuracies respectively for solving the nonlocal Cahn-Hilliard equation. The temporal discretization is done by using the stabilization technique with the nonlocal diffusion term treated implicitly, while the spatial discretization is carried out by the Fourier collocation method with FFT-based fast implementations. The energy stabilities are rigorously established for both methods in the fully discrete sense. Numerical experiments are conducted for a typical case involving Gaussian kernels. We test the temporal convergence rates of the proposed schemes and make a comparison of the nonlocal phase transition process with the corresponding local one. In addition, long-time simulations of the coarsening dynamics are also performed to predict the power law of the energy decay.
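The stabilization technique can be illustrated on the simpler local Cahn-Hilliard equation in 1D with Fourier collocation: the nonlinear term is treated explicitly, a stabilizer S and the biharmonic term implicitly, so each step is a pointwise division in Fourier space. This is a first-order sketch under assumed parameters (eps, S, grid); the paper's nonlocal variant would replace the eps^2 k^4 symbol by the Fourier symbol of the nonlocal operator, and its second-order scheme is not shown.

```python
import numpy as np

def cahn_hilliard_step(u, tau, eps=0.05, S=2.0):
    """One step of a first-order stabilized linear semi-implicit scheme for
    u_t = Laplacian(u^3 - u) - eps^2 Laplacian^2 u  on [0, 2*pi), Fourier
    collocation.  In Fourier space:
      (1 + tau*eps^2*k^4 + tau*S*k^2) u_hat^{n+1}
          = (1 + tau*S*k^2) u_hat^n - tau*k^2 * fft(u^3 - u).
    S is the stabilization constant enabling large time steps."""
    n = u.size
    k2 = np.fft.fftfreq(n, d=1.0 / n) ** 2        # integer wavenumbers squared
    f_hat = np.fft.fft(u**3 - u)
    u_hat = np.fft.fft(u)
    u_hat = ((1 + tau * S * k2) * u_hat - tau * k2 * f_hat) / (
        1 + tau * eps**2 * k2**2 + tau * S * k2)
    return np.fft.ifft(u_hat).real
```

Note the k = 0 mode is untouched by the update, so the scheme conserves mass exactly — a property the test below checks along with boundedness of the solution.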
A new performance index for the repetitive motion of mobile manipulators.
Xiao, Lin; Zhang, Yunong
2014-02-01
A mobile manipulator is a robotic device composed of a mobile platform and a stationary manipulator fixed to the platform. To achieve repetitive motion control of mobile manipulators, the mobile platform and the manipulator have to realize repetitive motion simultaneously. To this end, a novel quadratic performance index is, for the first time, designed and presented in this paper, and its effectiveness is analyzed using a neural dynamics method. A repetitive motion scheme is then proposed by combining this criterion with the physical constraints and integrated kinematic equations of mobile manipulators, and is further reformulated as a quadratic program (QP) subject to equality and bound constraints. In addition, two important bridge theorems are established to prove that such a QP can be converted equivalently into a linear variational inequality, and then equivalently into a piecewise-linear projection equation (PLPE). A real-time numerical algorithm based on the PLPE is thus developed and applied to the online solution of the resultant QP. Two path-tracking tasks demonstrate the effectiveness and accuracy of the repetitive motion scheme. Comparisons between the nonrepetitive and repetitive motions further validate the superiority and novelty of the proposed scheme.
A framework for implementing data services in multi-service mobile satellite systems
NASA Technical Reports Server (NTRS)
Ali, Mohammed O.; Leung, Victor C. M.; Spolsky, Andrew I.
1988-01-01
Mobile satellite systems being planned for introduction in the early 1990s are expected to be invariably of the multi-service type. Mobile Telephone Service (MTS), Mobile Radio Service (MRS), and Mobile Data Service (MDS) are the major classifications used to categorize the many user applications to be supported. The MTS and MRS services encompass circuit-switched voice communication applications, and may be efficiently implemented using a centralized Demand-Assigned Multiple Access (DAMA) scheme. Applications under the MDS category are, on the other hand, message-oriented and expected to vary widely in characteristics; from simplex mode short messaging applications to long duration, full-duplex interactive data communication and large file transfer applications. For some applications under this service category, the conventional circuit-based DAMA scheme may prove highly inefficient due to the long time required to set up and establish communication links relative to the actual message transmission time. It is proposed that by defining a set of basic bearer services to be supported in MDS and optimizing their transmission and access schemes independent of the MTS and MRS services, the MDS applications can be more efficiently integrated into the multi-service design of mobile satellite systems.
On processed splitting methods and high-order actions in path-integral Monte Carlo simulations.
Casas, Fernando
2010-10-21
Processed splitting methods are particularly well adapted to carry out path-integral Monte Carlo (PIMC) simulations: since one is mainly interested in estimating traces of operators, only the kernel of the method is necessary to approximate the thermal density matrix. Unfortunately, they suffer the same drawback as standard, nonprocessed integrators: kernels of effective order greater than two necessarily involve some negative coefficients. This problem can be circumvented, however, by incorporating modified potentials into the composition, thus rendering schemes of higher effective order. In this work we analyze a family of fourth-order schemes recently proposed in the PIMC setting, paying special attention to their linear stability properties, and justify their observed behavior in practice. We also propose a new fourth-order scheme requiring the same computational cost but with an enlarged stability interval.
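The effective-order behavior that motivates these kernels can be checked numerically. The sketch below composes the plain second-order Strang kernel to approximate Tr exp(-β(T+V)) on a toy matrix pair; the stand-in kinetic and potential matrices, the grid size, and β are assumptions of this sketch, not the paper's setup (the fourth-order kernels with modified potentials follow the same composition pattern):

```python
import numpy as np
from scipy.linalg import expm

def strang_kernel(T, V, eps):
    """One Strang (second-order) factor approximating exp(-eps*(T+V))."""
    return expm(-0.5 * eps * V) @ expm(-eps * T) @ expm(-0.5 * eps * V)

def trace_estimate(T, V, beta, n_slices):
    """Approximate Tr exp(-beta*H), H = T + V, by composing n_slices kernels."""
    eps = beta / n_slices
    K = strang_kernel(T, V, eps)
    return np.trace(np.linalg.matrix_power(K, n_slices)).real

# toy non-commuting pair: a 1-D finite-difference "kinetic" part and a
# random diagonal "potential" (assumed example, not a physical Hamiltonian)
rng = np.random.default_rng(0)
n = 8
T = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) * 0.5
V = np.diag(rng.uniform(0.0, 1.0, n))
beta = 1.0
exact = np.trace(expm(-beta * (T + V))).real
err_coarse = abs(trace_estimate(T, V, beta, 4) - exact)
err_fine = abs(trace_estimate(T, V, beta, 8) - exact)
# halving the slice width should cut the trace error roughly fourfold
```

Since only the trace is estimated, boundary corrections of processed methods drop out, which is the point made in the abstract: the kernel alone determines the accuracy.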
Effects of integration time on in-water radiometric profiles.
D'Alimonte, Davide; Zibordi, Giuseppe; Kajiyama, Tamito
2018-03-05
This work investigates the effects of integration time on in-water downward irradiance E_d, upward irradiance E_u and upwelling radiance L_u profile data acquired with free-fall hyperspectral systems. Analyzed quantities are the subsurface value and the diffuse attenuation coefficient derived by applying linear and non-linear regression schemes. Case studies include oligotrophic waters (Case-1), as well as waters dominated by Colored Dissolved Organic Matter (CDOM) and Non-Algal Particles (NAP). Assuming a 24-bit digitization, measurements resulting from the accumulation of photons over integration times varying between 8 and 2048 ms are evaluated at depths corresponding to: 1) the beginning of each integration interval (Fst); 2) the end of each integration interval (Lst); 3) the average of the Fst and Lst values (Avg); and finally 4) values weighted to account for the diffuse attenuation coefficient of water (Wgt). Statistical figures show that the effects of integration time can bias results well above 5% as a function of the depth definition. Results indicate the validity of the Wgt depth definition and the fair applicability of the Avg one. Instead, both the Fst and Lst depths should not be adopted since they may introduce pronounced biases in E_u and L_u regression products for highly absorbing waters. Finally, the study reconfirms the relevance of combining multiple radiometric casts into a single profile to increase the precision of regression products.
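The four depth definitions can be made concrete for a single integration interval. The formalization below is hypothetical: the function name, the assumption of linear descent across the interval, and the particular weighting formula (Wgt as the depth whose exponentially attenuated irradiance equals the interval-averaged irradiance) are illustrative choices, not the paper's exact definitions:

```python
import math

def depth_definitions(z_first, z_last, kd):
    """Candidate depth assignments for one integration interval of a
    free-falling radiometer (hypothetical formalization of Fst/Lst/Avg/Wgt).
    kd is the diffuse attenuation coefficient [1/m]."""
    fst = z_first
    lst = z_last
    avg = 0.5 * (z_first + z_last)
    # attenuation-weighted depth: the depth at which E(z) = E0*exp(-kd*z)
    # equals the mean irradiance accumulated over the interval
    mean_e = (math.exp(-kd * z_first) - math.exp(-kd * z_last)) / (kd * (z_last - z_first))
    wgt = -math.log(mean_e) / kd
    return fst, lst, avg, wgt

fst, lst, avg, wgt = depth_definitions(5.0, 7.0, 0.3)
# wgt lies between fst and avg: exponential decay weights shallow depths more
```

This also shows why Fst and Lst bias regression products most: they sit at the extremes of the interval, while Wgt accounts for where the photons were actually collected.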
NASA Astrophysics Data System (ADS)
Caplan, R. M.
2013-04-01
We present a simple-to-use yet powerful code package called NLSEmagic to numerically integrate the nonlinear Schrödinger equation in one, two, and three dimensions. NLSEmagic is a high-order finite-difference code package which utilizes graphic processing unit (GPU) parallel architectures. The codes running on the GPU are many times faster than their serial counterparts, and are much cheaper to run than on standard parallel clusters. The codes are developed with usability and portability in mind, and therefore are written to interface with MATLAB utilizing custom GPU-enabled C codes with the MEX-compiler interface. The packages are freely distributed, including user manuals and set-up files.
Catalogue identifier: AEOJ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOJ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 124453
No. of bytes in distributed program, including test data, etc.: 4728604
Distribution format: tar.gz
Programming language: C, CUDA, MATLAB.
Computer: PC, MAC.
Operating system: Windows, MacOS, Linux.
Has the code been vectorized or parallelized?: Yes. Number of processors used: Single CPU; number of GPU processors dependent on chosen GPU card (max is currently 3072 cores on GeForce GTX 690).
Supplementary material: Setup guide, Installation guide.
RAM: Highly dependent on dimensionality and grid size. For typical medium-large problem size in three dimensions, 4GB is sufficient.
Keywords: Nonlinear Schrödinger Equation, GPU, high-order finite difference, Bose-Einstein condensates.
Classification: 4.3, 7.7.
Nature of problem: Integrate solutions of the time-dependent one-, two-, and three-dimensional cubic nonlinear Schrödinger equation.
Solution method: The integrators utilize a fully-explicit fourth-order Runge-Kutta scheme in time and both second- and fourth-order differencing in space. The integrators are written to run on NVIDIA GPUs and are interfaced with MATLAB including built-in visualization and analysis tools.
Restrictions: The main restriction for the GPU integrators is the amount of RAM on the GPU, as the code is currently only designed for running on a single GPU.
Unusual features: Ability to visualize real-time simulations through the interaction of MATLAB and the compiled GPU integrators.
Additional comments: Setup guide and Installation guide provided. Program has a dedicated web site at www.nlsemagic.com.
Running time: A three-dimensional run with a grid dimension of 87×87×203 for 3360 time steps (100 non-dimensional time units) takes about one and a half minutes on a GeForce GTX 580 GPU card.
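The solution method above (explicit fourth-order Runge-Kutta in time, central differencing in space) can be sketched for the 1-D cubic NLS. This is an illustrative reconstruction, not NLSEmagic code: the normalization i*u_t + 0.5*u_xx + |u|^2*u = 0, the periodic grid, the soliton initial data, and the step-size choice are all assumptions of the sketch.

```python
import numpy as np

def nls_rhs(u, dx):
    """Right-hand side of u_t = i*(0.5*u_xx + |u|^2 u) with
    second-order centered differencing on a periodic grid."""
    uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    return 1j * (0.5 * uxx + np.abs(u) ** 2 * u)

def rk4_step(u, dt, dx):
    """Classical fourth-order Runge-Kutta time step."""
    k1 = nls_rhs(u, dx)
    k2 = nls_rhs(u + 0.5 * dt * k1, dx)
    k3 = nls_rhs(u + 0.5 * dt * k2, dx)
    k4 = nls_rhs(u + dt * k3, dx)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# bright-soliton initial data on a periodic grid (assumed parameters)
L, n = 40.0, 512
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
u = (1.0 / np.cosh(x)).astype(complex)
dt = 0.25 * dx**2            # explicit scheme: dt restricted by dx^2
norm0 = np.sum(np.abs(u) ** 2) * dx
for _ in range(200):
    u = rk4_step(u, dt, dx)
norm1 = np.sum(np.abs(u) ** 2) * dx
# the L2 norm ("particle number") should be nearly conserved
```

The dx²-restricted time step is exactly why offloading the many cheap explicit steps to a GPU pays off.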
An annular superposition integral for axisymmetric radiators
Kelly, James F.; McGough, Robert J.
2007-01-01
A fast integral expression for computing the nearfield pressure is derived for axisymmetric radiators. This method replaces the sum of contributions from concentric annuli with an exact double integral that converges much faster than methods that evaluate the Rayleigh-Sommerfeld integral or the generalized King integral. Expressions are derived for plane circular pistons using both continuous wave and pulsed excitations. Several commonly used apodization schemes for the surface velocity distribution are considered, including polynomial functions and a “smooth piston” function. The effect of different apodization functions on the spectral content of the wave field is explored. Quantitative error and time comparisons between the new method, the Rayleigh-Sommerfeld integral, and the generalized King integral are discussed. At all error levels considered, the annular superposition method achieves a speed-up of at least a factor of 4 relative to the point-source method and a factor of 3 relative to the generalized King integral without increasing the computational complexity. PMID:17348500
78 FR 32698 - Shipping Coordinating Committee; Notice of Committee Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-31
... --Partnerships for progress --Voluntary IMO Member State Audit Scheme --Integration of women in the maritime... Member State Audit Scheme --Consideration of the report of the Maritime Safety Committee --Consideration...
Upon Generating (2+1)-dimensional Dynamical Systems
NASA Astrophysics Data System (ADS)
Zhang, Yufeng; Bai, Yang; Wu, Lixin
2016-06-01
Under the framework of the Adler-Gel'fand-Dikii (AGD) scheme, we first propose two Hamiltonian operator pairs over a noncommutative ring so that we can construct a new dynamical system in 2+1 dimensions, and we then obtain a generalized special Novikov-Veselov (NV) equation via the Manakov triple. Next, with the aid of a special symmetric Lie algebra of a reductive homogeneous group G, we adopt the Tu-Andrushkiw-Huang (TAH) scheme to generate a new integrable (2+1)-dimensional dynamical system and its Hamiltonian structure, which can reduce to the well-known (2+1)-dimensional Davey-Stewartson (DS) hierarchy. We then extend the binomial residue representation (briefly BRR) scheme to super higher-dimensional integrable hierarchies with the help of a super subalgebra of the super Lie algebra sl(2/1), which is also a kind of symmetric Lie algebra of the reductive homogeneous group G. As applications, we obtain a super (2+1)-dimensional MKdV hierarchy which can be reduced to a super (2+1)-dimensional generalized AKNS equation. Finally, we compare the advantages and shortcomings of the three schemes for generating integrable dynamical systems.
Gröbner Bases and Generation of Difference Schemes for Partial Differential Equations
NASA Astrophysics Data System (ADS)
Gerdt, Vladimir P.; Blinkov, Yuri A.; Mozzhilkin, Vladimir V.
2006-05-01
In this paper we present an algorithmic approach to the generation of fully conservative difference schemes for linear partial differential equations. The approach is based on enlargement of the equations in their integral conservation law form by extra integral relations between unknown functions and their derivatives, and on discretization of the obtained system. The structure of the discrete system depends on numerical approximation methods for the integrals occurring in the enlarged system. As a result of the discretization, a system of linear polynomial difference equations is derived for the unknown functions and their partial derivatives. A difference scheme is constructed by elimination of all the partial derivatives. The elimination can be achieved by selecting a proper elimination ranking and by computing a Gröbner basis of the linear difference ideal generated by the polynomials in the discrete system. For these purposes we use the difference form of Janet-like Gröbner bases and their implementation in Maple. As illustration of the described methods and algorithms, we construct a number of difference schemes for Burgers and Falkowich-Karman equations and discuss their numerical properties.
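For contrast with the algebraic construction above, a conventional fully conservative (flux-form) scheme for the Burgers equation u_t + (u²/2)_x = 0 shows the kind of end product such a derivation targets. The Lax-Friedrichs flux used here is a standard textbook choice, not one of the paper's Gröbner-derived schemes; conservation of the discrete integral of u is exact by the telescoping of fluxes:

```python
import numpy as np

def lax_friedrichs_burgers(u, dt, dx):
    """One flux-form step for u_t + (u^2/2)_x = 0 on a periodic grid:
    u_i^{n+1} = u_i^n - dt/dx * (F_{i+1/2} - F_{i-1/2})."""
    f = 0.5 * u**2
    up, um = np.roll(u, -1), np.roll(u, 1)
    fp, fm = np.roll(f, -1), np.roll(f, 1)
    # Lax-Friedrichs numerical flux: averaged flux plus dissipation
    flux_r = 0.5 * (f + fp) - 0.5 * dx / dt * (up - u)
    flux_l = 0.5 * (fm + f) - 0.5 * dx / dt * (u - um)
    return u - dt / dx * (flux_r - flux_l)

n = 200
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u = 1.0 + 0.5 * np.sin(x)
dt = 0.4 * dx / np.max(np.abs(u))       # CFL-limited step
total0 = u.sum() * dx
for _ in range(100):
    u = lax_friedrichs_burgers(u, dt, dx)
total1 = u.sum() * dx
# conservative form: the discrete integral of u is preserved to round-off
```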
Computational Aeroacoustics by the Space-time CE/SE Method
NASA Technical Reports Server (NTRS)
Loh, Ching Y.
2001-01-01
In recent years, a new numerical methodology for conservation laws, the Space-Time Conservation Element and Solution Element Method (CE/SE), was developed by Dr. Chang of NASA Glenn Research Center and collaborators. In nature, the new method may be categorized as a finite volume method, where the conservation element (CE) is equivalent to a finite control volume (or cell) and the solution element (SE) can be understood as the cell interface. However, due to its rigorous treatment of the fluxes and geometry, it is different from the existing schemes. The CE/SE scheme features: (1) space and time treated on the same footing, with the integral equations of conservation laws solved with second order accuracy, (2) high resolution, low dispersion and low dissipation, (3) a novel, truly multi-dimensional, simple but effective non-reflecting boundary condition, (4) effortless implementation of computation, with no numerical fix or parameter choice needed, and (5) robustness over a wide spectrum of compressible flows: from weak linear acoustic waves to strong, discontinuous waves (shocks) appropriate for linear and nonlinear aeroacoustics. Currently, the CE/SE scheme has been developed to such a stage that a 3-D unstructured CE/SE Navier-Stokes solver is already available. However, in the present paper, as a general introduction to the CE/SE method, only the 2-D unstructured Euler CE/SE solver is chosen as a prototype and is sketched in Section 2. Then applications of the CE/SE scheme to linear, nonlinear aeroacoustics and airframe noise are depicted in Sections 3, 4, and 5 respectively to demonstrate its robustness and capability.
A new method of enhancing telecommand security: the application of GCM in TC protocol
NASA Astrophysics Data System (ADS)
Zhang, Lei; Tang, Chaojing; Zhang, Quan
2007-11-01
In recent times, security has grown into a topic of major importance for space missions. Many space agencies have been engaged in research on the selection of proper algorithms for ensuring Telecommand security according to the space communication environment, especially in regard to privacy and authentication. Since space missions with high security levels need to ensure both privacy and authentication, Authenticated Encryption with Associated Data (AEAD) schemes must be integrated into normal Telecommand protocols. This paper provides an overview of the Galois Counter Mode (GCM) of operation, which is one of the available two-pass AEAD schemes, and some preliminary considerations and analyses about its possible application to Telecommand frames specified by CCSDS.
NASA Technical Reports Server (NTRS)
Buchard, Virginie; Da Silva, Arlindo; Todling, Ricardo
2017-01-01
In the GEOS near real-time system, as well as in MERRA-2, the latest reanalysis produced at NASA's Global Modeling and Assimilation Office (GMAO), the assimilation of aerosol observations is performed by means of a so-called analysis splitting method. In line with the transition of the GEOS meteorological data assimilation system to a hybrid Ensemble-Variational formulation, we are updating the aerosol component of our assimilation system to an ensemble square root filter (EnSRF; Whitaker and Hamill (2002)) type of scheme. We present a summary of our preliminary results of the assimilation of column-integrated aerosol observations (Aerosol Optical Depth; AOD) using an EnSRF scheme and the ensemble members produced routinely by the meteorological assimilation.
Zou, Bin; Jiang, Xiaolu; Duan, Xiaoli; Zhao, Xiuge; Zhang, Jing; Tang, Jingwen; Sun, Guoqing
2017-03-23
Traditional sampling for soil pollution evaluation is cost intensive and has limited representativeness. Therefore, developing methods that can accurately and rapidly identify at-risk areas and the contributing pollutants is imperative for soil remediation. In this study, we propose an innovative integrated H-G scheme combining human health risk assessment and geographical detector methods, based on geographical information system technology, and validated its feasibility in a renewable resource industrial park in mainland China. With a discrete site investigation of cadmium (Cd), arsenic (As), copper (Cu), mercury (Hg) and zinc (Zn) concentrations, the continuous surfaces of carcinogenic risk and non-carcinogenic risk caused by these heavy metals were estimated and mapped. Source apportionment analysis using geographical detector methods further revealed that these risks were primarily attributed to As, according to the power of the determinant and its associated synergic actions with other heavy metals. Concentrations of the critical As and Cd, and the associated exposure CRs, are close to the safe thresholds after remediating the risk areas identified by the integrated H-G scheme. Therefore, the integrated H-G scheme provides an effective approach to support decision-making for regional contaminated soil remediation at fine spatial resolution with limited sampling data over a large geographical extent.
Numerically stable formulas for a particle-based explicit exponential integrator
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth
2015-05-01
Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
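The piecewise definition of divided differences near removable singularities can be illustrated with the simplest case: the first divided difference of exp at the nodes {0, x}, i.e. (e^x - 1)/x. The switch-over tolerance and series order below are illustrative choices of this sketch, not the paper's optimal approximation:

```python
import math

def phi1(x, tol=1e-3):
    """First divided difference of exp at nodes {0, x}: (e^x - 1)/x.
    Near the removable singularity x = 0 the direct formula loses
    significant digits to cancellation, so inside a prescribed
    neighbourhood a truncated Taylor series is used instead (the
    piecewise definition described in the abstract)."""
    if abs(x) > tol:
        return (math.exp(x) - 1.0) / x      # direct formula, safe away from 0
    # series 1 + x/2 + x^2/6 + x^3/24; truncation error O(x^4) for |x| <= tol
    return 1.0 + x * (0.5 + x * (1.0 / 6.0 + x / 24.0))

# at a tiny argument the naive formula is dominated by cancellation noise,
# while the guarded version stays accurate (phi1(x) -> 1 as x -> 0)
value = phi1(1e-12)
```

Higher-order divided differences of exp, which the abstract identifies as the worst case, need the same guard but with more series terms and a carefully chosen neighbourhood.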
NASA Astrophysics Data System (ADS)
Wang, Lanjing; Shao, Wenjing; Wang, Zhiyue; Fu, Wenfeng; Zhao, Wensheng
2018-02-01
Taking the MEA chemical absorption carbon capture system, with an 85% carbon capture rate, of a 660 MW ultra-supercritical unit as an example, this paper puts forward a new type of turbine dedicated to supplying steam to the carbon capture system. A comparison of the thermal systems of the power plant under different steam supply schemes, carried out with EBSILON, indicated the optimal extraction scheme for the steam extraction system in the carbon capture system. The results show that the cycle heat efficiency of the unit with the carbon capture turbine system is higher than that of the usual scheme without it. With the introduction of the carbon capture turbine, the scheme which extracts steam from the high pressure cylinder's steam input point shows the highest cycle thermal efficiency. Its indexes are superior to those of the other schemes, and it is more suitable for an existing coal-fired power plant integrated with a post-combustion carbon dioxide capture system.
New regularization scheme for blind color image deconvolution
NASA Astrophysics Data System (ADS)
Chen, Li; He, Yu; Yap, Kim-Hui
2011-01-01
This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have a significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color image channel independently, thereby ignoring the interchannel correlation present in the color images. In view of this, a unified image regularization scheme is developed to recover edges of color images and reduce color artifacts. In addition, by using the color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and the information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.
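One inner step of such a scheme, the image-domain cost minimization, has a closed form in the Fourier domain when the blur iterate is held fixed. The sketch below is deliberately reduced: it is 1-D, non-blind (the blur is known), and uses a plain L2 (Tikhonov) regularizer rather than the paper's edge-preserving and interchannel terms; the signal, kernel, and λ are invented for illustration:

```python
import numpy as np

def tikhonov_deconv(blurred, kernel, lam):
    """Minimize ||k * x - y||^2 + lam*||x||^2 over x in the Fourier domain:
    the image-domain step of an alternating-minimization deconvolution,
    with the blur k fixed. Closed form: X = conj(K)*Y / (|K|^2 + lam)."""
    K = np.fft.fft(kernel, n=blurred.size)
    Y = np.fft.fft(blurred)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + lam)
    return np.real(np.fft.ifft(X))

rng = np.random.default_rng(1)
n = 128
x_true = np.zeros(n)
x_true[40:60] = 1.0                                  # a box "edge" signal
kernel = np.ones(5) / 5.0                            # moving-average blur
blurred = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(kernel, n=n)))
blurred += 0.001 * rng.standard_normal(n)            # slight observation noise
restored = tikhonov_deconv(blurred, kernel, lam=1e-3)
# the restoration should be closer to the true signal than the observation
```

In the blind setting, the alternating procedure would next fix this image estimate and minimize the blur-domain cost under the spectral constraints, then repeat.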
Investigating students’ failure in fractional concept construction
NASA Astrophysics Data System (ADS)
Kurniawan, Henry; Sutawidjaja, Akbar; Rahman As’ari, Abdur; Muksar, Makbul; Setiawan, Iwan
2018-04-01
Failure is the inability to achieve a goal. This failure occurs because a larger scheme integrates the schemes in the mind that are related to the problem at hand. These schemes are integrated so that they are interconnected to form new structures, and this new scheme structure is used to interpret the problems at hand. This research is a qualitative study conducted to trace students' failure in fractional concept construction. The subjects were 2 students, selected from 15 students on the basis of pre-set criteria, falling into two groups that failed in solving the problem: group 1 traces the failure of subject S1, and group 2 traces the failure of subject S2.
Sensitivity of Age-of-Air Calculations to the Choice of Advection Scheme
NASA Technical Reports Server (NTRS)
Eluszkiewicz, Janusz; Hemler, Richard S.; Mahlman, Jerry D.; Bruhwiler, Lori; Takacs, Lawrence L.
2000-01-01
The age of air has recently emerged as a diagnostic of atmospheric transport unaffected by chemical parameterizations, and the features in the age distributions computed in models have been interpreted in terms of the models' large-scale circulation field. This study shows, however, that in addition to the simulated large-scale circulation, three-dimensional age calculations can also be affected by the choice of advection scheme employed in solving the tracer continuity equation. Specifically, using the 3.0 deg latitude x 3.6 deg longitude, 40 vertical level version of the Geophysical Fluid Dynamics Laboratory SKYHI GCM and six online transport schemes ranging from Eulerian through semi-Lagrangian to fully Lagrangian, it will be demonstrated that the oldest ages are obtained using the nondiffusive centered-difference schemes while the youngest ages are computed with a semi-Lagrangian transport (SLT) scheme. The centered-difference schemes are capable of producing ages older than 10 years in the mesosphere, thus eliminating the "young bias" found in previous age-of-air calculations. At this stage, only limited intuitive explanations can be advanced for this sensitivity of age-of-air calculations to the choice of advection scheme. In particular, age distributions computed online with the National Center for Atmospheric Research Community Climate Model (MACCM3) using different varieties of the SLT scheme are substantially older than the SKYHI SLT distribution. The different varieties, including a noninterpolating-in-the-vertical version (which is essentially centered-difference in the vertical), also produce a narrower range of age distributions than the suite of advection schemes employed in the SKYHI model.
While additional MACCM3 experiments with a wider range of schemes would be necessary to provide more definitive insights, the older and less variable MACCM3 age distributions can plausibly be interpreted as being due to the semi-implicit semi-Lagrangian dynamics employed in the MACCM3. This type of dynamical core (employed with a 60-min time step) is likely to reduce SLT's interpolation errors that are compounded by the short-term variability characteristic of the explicit centered-difference dynamics employed in the SKYHI model (time step of 3 min). In the extreme case of a very slowly varying circulation, the choice of advection scheme has no effect on two-dimensional (latitude-height) age-of-air calculations, owing to the smooth nature of the transport circulation in 2D models. These results suggest that nondiffusive schemes may be the preferred choice for multiyear simulations of tracers not overly sensitive to the requirement of monotonicity (this category includes many greenhouse gases). At the same time, age-of-air calculations offer a simple quantitative diagnostic of a scheme's long-term diffusive properties and may help in the evaluation of dynamical cores in multiyear integrations. On the other hand, the sensitivity of the computed ages to the model numerics calls for caution in using age of air as a diagnostic of a GCM's large-scale circulation field.
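The diffusion contrast at the heart of this sensitivity is easy to reproduce in one dimension: advect a Gaussian tracer once around a periodic domain with a diffusive first-order upwind scheme and with a (nearly) nondiffusive centered-difference scheme. The grid size, Courant number, and pulse width below are assumptions of the sketch, not GCM settings:

```python
import numpy as np

def step_upwind(q, c):
    """First-order upwind step (strongly diffusive), Courant number c in (0,1]."""
    return q - c * (q - np.roll(q, 1))

def step_centered_rk4(q, c):
    """RK4 in time with second-order centered differencing in space
    (dispersive but very low numerical diffusion)."""
    def rhs(q):
        return -c * 0.5 * (np.roll(q, -1) - np.roll(q, 1))
    k1 = rhs(q)
    k2 = rhs(q + 0.5 * k1)
    k3 = rhs(q + 0.5 * k2)
    k4 = rhs(q + k3)
    return q + (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

n, c = 100, 0.5
q0 = np.exp(-0.5 * ((np.arange(n) - 50) / 5.0) ** 2)   # Gaussian tracer pulse
qu, qc = q0.copy(), q0.copy()
for _ in range(int(n / c)):      # one full revolution around the periodic domain
    qu = step_upwind(qu, c)
    qc = step_centered_rk4(qc, c)
# the diffusive upwind scheme flattens the peak far more than the centered one
```

The smeared upwind pulse is the 1-D analogue of the "young bias": diffusive transport mixes young air into old reservoirs, lowering the computed age.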
Time-dependent spectral renormalization method
NASA Astrophysics Data System (ADS)
Cole, Justin T.; Musslimani, Ziad H.
2017-11-01
The spectral renormalization method was introduced by Ablowitz and Musslimani (2005) as an effective way to numerically compute (time-independent) bound states for certain nonlinear boundary value problems. In this paper, we extend those ideas to the time domain and introduce a time-dependent spectral renormalization method as a numerical means to simulate linear and nonlinear evolution equations. The essence of the method is to convert the underlying evolution equation from its partial or ordinary differential form (using Duhamel's principle) into an integral equation. The solution sought is then viewed as a fixed point in both space and time. The resulting integral equation is then numerically solved using a simple renormalized fixed-point iteration method. Convergence is achieved by introducing a time-dependent renormalization factor which is numerically computed from the physical properties of the governing evolution equation. The proposed method has the ability to incorporate physics into the simulations in the form of conservation laws or dissipation rates. This novel scheme is implemented on benchmark evolution equations: the classical nonlinear Schrödinger (NLS), the integrable PT-symmetric nonlocal NLS, and the viscous Burgers' equations, each a prototypical example of a conservative or dissipative dynamical system. Numerical implementation and algorithm performance are also discussed.
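A scalar toy version conveys the essence of the renormalized fixed-point iteration. For u' = i|u|²u (a single-mode NLS analogue chosen here for illustration, not one of the paper's benchmarks), Duhamel's principle gives an integral equation, and each Picard sweep is renormalized so the conserved modulus |u(t)| = |u₀| holds exactly at every node:

```python
import numpy as np

def renormalized_picard(u0, t_grid, n_iter=50):
    """Fixed-point (Picard) iteration for u' = i|u|^2 u in integral form
    u(t) = u0 + i * int_0^t |u|^2 u ds, with a renormalization factor that
    projects each iterate back onto the conservation law |u(t)| = |u0|
    (a scalar toy model of the scheme described in the abstract)."""
    u = np.full(t_grid.size, u0, dtype=complex)
    for _ in range(n_iter):
        f = 1j * np.abs(u) ** 2 * u
        # trapezoidal Duhamel integral of the nonlinearity
        integral = np.concatenate(
            ([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t_grid))))
        u = u0 + integral
        # renormalization: enforce the conserved modulus at every node
        u *= np.abs(u0) / np.abs(u)
    return u

t = np.linspace(0.0, 1.0, 201)
u0 = 1.0 + 0.0j
u = renormalized_picard(u0, t)
exact = u0 * np.exp(1j * np.abs(u0) ** 2 * t)
# the iterate converges to the exact phase rotation, with |u| = 1 by construction
```

The renormalization step is where "physics is incorporated": the conservation law is imposed on every iterate rather than merely monitored.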
High-Speed Quantum Key Distribution Using Photonic Integrated Circuits
2013-01-01
protocol [14] that uses energy-time entanglement of pairs of photons. We are employing the QPIC architecture to implement a novel high-dimensional disper...continuous Hilbert spaces using measures of the covariance matrix. Although we focus the discussion on a scheme employing entangled photon pairs...is the probability that parameter estimation fails [20]. The parameter ε̄ accounts for the accuracy of estimating the smooth min-entropy, which
Transient Finite Element Computations on a Variable Transputer System
NASA Technical Reports Server (NTRS)
Smolinski, Patrick J.; Lapczyk, Ireneusz
1993-01-01
A parallel program to analyze transient finite element problems was written and implemented on a system of transputer processors. The program uses the explicit time integration algorithm which eliminates the need for equation solving, making it more suitable for parallel computations. An interprocessor communication scheme was developed for arbitrary two dimensional grid processor configurations. Several 3-D problems were analyzed on a system with a small number of processors.
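The explicit update that "eliminates the need for equation solving" can be sketched with central-difference time integration of a lumped-mass system M u'' + K u = 0: with a diagonal mass matrix, each step needs only a matrix-vector product and a division. The two-mass toy problem and step size below are assumptions of the sketch, not the paper's transputer code:

```python
import numpy as np

def explicit_central_difference(m, K, u0, v0, dt, n_steps):
    """Explicit central-difference integration of M u'' + K u = 0 with a
    lumped (diagonal) mass vector m: u_{n+1} = 2u_n - u_{n-1} + dt^2 * a_n,
    a_n = -K u_n / m. No linear solve is required at any step."""
    u = u0.copy()
    a = -(K @ u) / m                         # initial acceleration
    u_prev = u - dt * v0 + 0.5 * dt**2 * a   # standard starting procedure
    for _ in range(n_steps):
        a = -(K @ u) / m
        u_next = 2.0 * u - u_prev + dt**2 * a
        u_prev, u = u, u_next
    return u

# two-mass spring chain (assumed toy problem); eigenfrequencies 1 and sqrt(3)
m = np.array([1.0, 1.0])
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
u0 = np.array([1.0, 0.0])
v0 = np.zeros(2)
# conditional stability requires dt < 2/omega_max = 2/sqrt(3); dt = 0.1 is safe
u_end = explicit_central_difference(m, K, u0, v0, dt=0.1, n_steps=100)
```

Because each step is a local matrix-vector product, the state vector partitions naturally across processors, which is what makes the algorithm "more suitable for parallel computations" on a grid of transputers.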
1991-05-24
hardware data compressors. [BuBo89, BuBo90, BuBo91] The data compression scheme of Ziv and Lempel repeatedly matches the input stream to words contained...most significantly reduce dictionary size requirements in practical Ziv-Lempel encoders, without compromising compression. However, the additional...achieve a fixed 20MB/sec data rate. Thus, our Ziv-Lempel implementation realizes a speed improvement of 10 to 20 times that of the fastest recent
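The dictionary-matching loop described in the fragment can be illustrated with LZW, a common dictionary-based Ziv-Lempel variant: repeatedly match the longest input prefix already in the dictionary, emit its code, and add the match extended by one byte. This sketch grows the dictionary without bound, whereas the cited work is precisely about bounding dictionary size in hardware:

```python
def lzw_compress(data: bytes) -> list[int]:
    """Minimal LZW: match the longest dictionary word, emit its code,
    insert the word extended by the next byte (illustrative sketch)."""
    dictionary = {bytes([i]): i for i in range(256)}
    word, out = b"", []
    for byte in data:
        candidate = word + bytes([byte])
        if candidate in dictionary:
            word = candidate
        else:
            out.append(dictionary[word])
            dictionary[candidate] = len(dictionary)   # grows without bound here
            word = bytes([byte])
    if word:
        out.append(dictionary[word])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    """Inverse transform; rebuilds the dictionary on the fly."""
    dictionary = {i: bytes([i]) for i in range(256)}
    prev = dictionary[codes[0]]
    out = [prev]
    for code in codes[1:]:
        entry = dictionary.get(code, prev + prev[:1])  # the tricky KwKwK case
        out.append(entry)
        dictionary[len(dictionary)] = prev + entry[:1]
        prev = entry
    return b"".join(out)

msg = b"abracadabra abracadabra"
codes = lzw_compress(msg)
# the code stream round-trips, and repeated substrings shrink it below len(msg)
```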
Tension Cutoff and Parameter Identification for the Viscoplastic Cap Model.
1983-04-01
computer program "VPDRVR" which employs a Crank-Nicolson time integration scheme and a Newton-Raphson iterative solution procedure. Numerical studies were...parameters was illustrated for triaxial stress and uniaxial strain loading for a well-studied sand material (McCormick Ranch Sand). Lastly, a finite element...viscoplastic tension-cutoff criterion and to establish parameter identification techniques with experimental data. Herein lies the impetus of this study
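The Crank-Nicolson/Newton-Raphson pairing used in VPDRVR can be sketched on a scalar ODE; the toy problem u' = -u³, the step sizes, and the tolerance are choices of this sketch, not the viscoplastic cap model:

```python
import math

def crank_nicolson_newton(f, dfdu, u0, dt, n_steps, tol=1e-12):
    """Crank-Nicolson for u' = f(u): at each step solve the implicit relation
    u_new - u - 0.5*dt*(f(u) + f(u_new)) = 0 with scalar Newton-Raphson
    (the same time scheme / iterative solver pairing as in the abstract)."""
    u = u0
    for _ in range(n_steps):
        g_known = u + 0.5 * dt * f(u)        # explicit half of the update
        u_new = u                            # initial Newton guess
        for _ in range(50):
            residual = u_new - 0.5 * dt * f(u_new) - g_known
            if abs(residual) < tol:
                break
            u_new -= residual / (1.0 - 0.5 * dt * dfdu(u_new))
        u = u_new
    return u

# nonlinear test problem u' = -u^3, u(0) = 1, exact solution 1/sqrt(1 + 2t)
u_end = crank_nicolson_newton(lambda u: -u**3, lambda u: -3.0 * u**2,
                              u0=1.0, dt=0.1, n_steps=10)
```

Crank-Nicolson's second-order accuracy and unconditional linear stability are what make the implicit solve, and hence the Newton iteration, worth the cost for stiff viscoplastic rate equations.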
Mishra, Dheerendra; Mukhopadhyay, Sourav; Chaturvedi, Ankita; Kumari, Saru; Khan, Muhammad Khurram
2014-06-01
Remote user authentication is desirable for a Telecare Medicine Information System (TMIS) for the safety, security and integrity of data transmitted over the public channel. In 2013, Tan presented a biometric-based remote user authentication scheme and claimed that his scheme is secure. Recently, Yan et al. demonstrated some drawbacks in Tan's scheme and proposed an improved scheme to erase those drawbacks. We analyze Yan et al.'s scheme and identify that it is vulnerable to an off-line password guessing attack and does not protect anonymity. Moreover, in their scheme, the login and password change phases are inefficient at verifying the correctness of input, and the inefficiency of the password change phase can cause a denial of service attack. Further, we design an improved scheme for TMIS with the aim of eliminating the drawbacks of Yan et al.'s scheme.
Carpintero, Guillermo; Hisatake, Shintaro; de Felipe, David; Guzman, Robinson; Nagatsuma, Tadao; Keil, Norbert
2018-02-14
We report for the first time the successful wavelength stabilization of two hybrid integrated InP/Polymer DBR lasers through optical injection. The two InP/Polymer DBR lasers are integrated into a photonic integrated circuit, providing an ideal source for millimeter and Terahertz wave generation by optical heterodyne technique. These lasers offer the widest tuning range of the carrier wave demonstrated to date up into the Terahertz range, about 20 nm (2.5 THz) on a single photonic integrated circuit. We demonstrate the application of this source to generate a carrier wave at 330 GHz to establish a wireless data transmission link at a data rate up to 18 Gbit/s. Using a coherent detection scheme we increase the sensitivity by more than 10 dB over direct detection.
NASA Astrophysics Data System (ADS)
Aziz, H. M. Abdul
Personal transport is a leading contributor to fossil fuel consumption and greenhouse gas (GHG) emissions in the U.S. The U.S. Energy Information Administration (EIA) reports that light-duty vehicles (LDV) are responsible for 61% of all transportation-related energy consumption in 2012, which is equivalent to 8.4 million barrels of oil (fossil fuel) per day. The carbon content in fossil fuels is the primary source of GHG emissions that links to the challenge associated with climate change. Evidently, it is high time to develop actionable and innovative strategies to reduce fuel consumption and GHG emissions from road transportation networks. This dissertation integrates the broader goal of minimizing energy and emissions into the transportation planning process using novel systems modeling approaches. This research aims to find, investigate, and evaluate strategies that minimize carbon-based fuel consumption and emissions for a transportation network. We propose user and system level strategies that can influence travel decisions and can reinforce pro-environmental attitudes of road users. Further, we develop strategies that system operators can implement to optimize traffic operations with an emissions minimization goal. To complete the framework, we develop an integrated traffic-emissions (EPA-MOVES) simulation framework that can assess the effectiveness of the strategies with computational efficiency and reasonable accuracy. The dissertation begins with exploring the trade-off between emissions and travel time in the context of daily travel decisions and its heterogeneous nature. Data are collected from a web-based survey, and the trade-off values, indicating the average additional travel minutes a person is willing to consider for reducing a pound of GHG emissions, are estimated from random parameter models. Results indicate different trade-off values for male and female groups.
Further, participants from high-income households are found to have higher trade-off values compared with other groups. Next, we propose a personal mobility carbon allowance (PMCA) scheme to reduce emissions from personal travel. PMCA is a market-based scheme that allocates carbon credits to users at no cost based on the emissions reduction goal of the system. Users can spend carbon credits for travel, and a marketplace exists where users can buy or sell credits. This dissertation addresses two primary dimensions: the change in travel behavior of the users and the impact at the network level in terms of travel time and emissions when PMCA is implemented. To understand this process, a real-time experimental game tool is developed where players are asked to make travel decisions within the carbon budget set by PMCA and are allowed to trade carbon credits in a market modeled as a double auction game. Random parameter models are estimated to examine the impact of PMCA on short-term travel decisions. Further, to assess the impact at the system level, a multi-class dynamic user equilibrium model is formulated that captures travel behavior under the PMCA scheme. The equivalent variational inequality problem is solved using the projection method. Results indicate that the PMCA scheme is able to reduce GHG emissions from transportation networks. Individuals with a high value of travel time (VOTT) are less sensitive to the PMCA scheme in the context of work trips. High- and medium-income users are more likely to have non-work trips with lower carbon cost (higher travel time) to save carbon credits for work trips. Next, we focus on the strategies from the perspective of system operators in transportation networks. Learning-based signal control schemes are developed that can reduce emissions from signalized urban networks. The algorithms are implemented and tested in the VISSIM micro simulator.
Finally, an integrated emissions-traffic simulator framework is outlined that can be used to evaluate the effectiveness of the strategies. The integrated framework uses MOVES2010b as the emissions simulator. To estimate emissions efficiently, we propose a hierarchical clustering technique with a dynamic time warping similarity measure (HC-DTW) to find the link driving schedules for MOVES2010b. Test results using data from a five-intersection corridor show that the HC-DTW technique can significantly reduce emissions estimation time without compromising accuracy. The benefits are found to be most significant when the variation in congestion level is high. In addition to finding novel strategies for reducing emissions from transportation networks, this dissertation has broader impacts on behavior-based energy policy design and transportation network modeling research. The trade-off values can be a useful indicator for identifying which policies are most effective at reinforcing pro-environmental travel choices. For instance, the model can estimate the distribution of the trade-off between emissions and travel time, and provide insights into the effectiveness of policies for New York City, provided data can be collected to construct a representative sample. The probability of route choice decisions varies across population groups and trip contexts. This probability, as a function of travel and demographic attributes, can be used as behavior rules for agents in an agent-based traffic simulation. Finally, the dynamic user equilibrium based network model provides a general framework for energy policies such as carbon taxes, tradable permits, and emissions credit systems.
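Since HC-DTW hinges on a dynamic time warping (DTW) distance between link speed profiles, a minimal sketch of the standard DTW recurrence may help. This is illustrative only (toy speed profiles, plain O(nm) recursion), not the dissertation's implementation:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two speed profiles."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Identical profiles have zero distance; a time-shifted copy stays close
# because warping absorbs the shift, unlike a pointwise (Euclidean) metric.
p1 = [0, 10, 20, 30, 30, 20, 10, 0]
p2 = [0, 0, 10, 20, 30, 30, 20, 10, 0]
print(dtw_distance(p1, p1))
print(dtw_distance(p1, p2))
```

A hierarchical clustering would then feed the pairwise DTW distances into a standard linkage routine.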
Hybrid scheduling mechanisms for Next-generation Passive Optical Networks based on network coding
NASA Astrophysics Data System (ADS)
Zhao, Jijun; Bai, Wei; Liu, Xin; Feng, Nan; Maier, Martin
2014-10-01
Network coding (NC) integrated into Passive Optical Networks (PONs) is regarded as a promising solution to achieve higher throughput and energy efficiency. To efficiently support multimedia traffic under this new transmission mode, novel NC-based hybrid scheduling mechanisms for Next-generation PONs (NG-PONs), including energy management, time slot management, resource allocation, and Quality-of-Service (QoS) scheduling, are proposed in this paper. First, we design an energy-saving scheme that is based on Bidirectional Centric Scheduling (BCS) to reduce the energy consumption of both the Optical Line Terminal (OLT) and Optical Network Units (ONUs). Next, we propose intra-ONU and inter-ONU scheduling schemes, which take NC into account to support service differentiation and QoS assurance. The presented simulation results show that BCS achieves higher energy efficiency under low traffic loads, clearly outperforming the alternative NC-based Upstream Centric Scheduling (UCS) scheme. Furthermore, BCS is shown to provide better QoS assurance.
NASA Astrophysics Data System (ADS)
Ajiatmo, Dwi; Robandi, Imam
2017-03-01
This paper proposes a control scheme for a photovoltaic array, battery, and supercapacitor connected in parallel for use in a solar vehicle. Based on the features of battery charging, the control scheme consists of three modes: dynamic irradiance mode, constant load mode, and constant voltage charging mode. Switching among the three modes is realized by controlling the duty cycle of the MOSFET in the boost converter, so that a higher voltage, better suited to the application, can be obtained. Compared with a conventional charging method with parallel-connected current limiting, the proposed control scheme shortens the charging time and increases the use of the power generated by the PV array. Simulations in Matlab/Simulink are used to determine the performance of the system in transient and steady state, and the simulated responses demonstrate the suitability of the proposed concept.
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2002-01-01
A variable order method of integrating initial value ordinary differential equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time-variant and nonlinear systems of equations. While it is more complex than most other methods, it produces exact solutions at arbitrary time step size when the time variation of the system can be modeled exactly by a polynomial. Solutions to several nonlinear problems exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with an exact solution and with solutions obtained by established methods.
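For a constant-coefficient linear system, integration via the state transition matrix reduces to stepping with the matrix exponential, which is exact at any step size. A minimal sketch of that special case (not the reported variable-order method; the oscillator example is illustrative):

```python
import numpy as np
from scipy.linalg import expm

def stm_step(A, x, h):
    """Advance x' = A x by one step of size h using the state
    transition matrix exp(A h); exact for constant A at any h."""
    return expm(A * h) @ x

# Harmonic oscillator x'' = -x, written as a first-order system.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
x0 = np.array([1.0, 0.0])
x = stm_step(A, x0, np.pi)   # half a period: exact solution is (-1, 0)
print(np.round(x, 6))
```

A conventional explicit integrator would need many small steps to cross half a period with comparable accuracy; here one step suffices.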
NASA Astrophysics Data System (ADS)
Yang, L. M.; Shu, C.; Wang, Y.; Sun, Y.
2016-08-01
The sphere function-based gas kinetic scheme (GKS), which was presented by Shu and his coworkers [23] for the simulation of inviscid compressible flows, is extended to simulate 3D viscous incompressible and compressible flows in this work. Firstly, we use certain discrete points to represent the spherical surface in the phase velocity space. Then, integrals along the spherical surface for conservation forms of moments, which are needed to recover the 3D Navier-Stokes equations, are approximated by integral quadrature. The basic requirement is that these conservation forms of moments be exactly satisfied by a weighted summation of distribution functions at the discrete points. It was found that the integral quadrature with eight discrete points on the spherical surface, which forms the D3Q8 discrete velocity model, matches the integrals exactly. In this way, the conservative variables and numerical fluxes can be computed by a weighted summation of distribution functions at eight discrete points. That is, the application of complicated formulations resulting from the integrals can be replaced by a simple solution process. Several numerical examples, including laminar flat plate boundary layer, 3D lid-driven cavity flow, steady flow through a 90° bending square duct, transonic flow around the DPW-W1 wing and supersonic flow around the NACA0012 airfoil, are chosen to validate the proposed scheme. Numerical results demonstrate that the present scheme can provide reasonable numerical results for 3D viscous flows.
Selimis, Georgios; Huang, Li; Massé, Fabien; Tsekoura, Ioanna; Ashouei, Maryam; Catthoor, Francky; Huisken, Jos; Stuyt, Jan; Dolmans, Guido; Penders, Julien; De Groot, Harmke
2011-10-01
For wireless body area networks (WBANs) to achieve widespread adoption, a number of security implications must be explored to promote and maintain fundamental medical ethical principles and social expectations. As a result, the integration of security functionality into sensor nodes is required. Integrating security functionality into a wireless sensor node increases the size of the software stored in program memory, the time the sensor's microprocessor needs to process the data, and the wireless network traffic exchanged among sensors. This security overhead has a dominant impact on energy dissipation, which is strongly related to the lifetime of the sensor, a critical aspect in wireless sensor network (WSN) technology. A strict definition of the security functionality, a complete hardware model (microprocessor and radio), the WBAN topology, and the structure of the medium access control (MAC) frame are required for an accurate estimation of the energy that security introduces into the WBAN. In this work, we define a lightweight security scheme for WBANs and estimate the additional energy consumption that the security scheme introduces, based on commercially available off-the-shelf hardware components (microprocessor and radio), the network topology, and the MAC frame. Furthermore, we propose a new microcontroller design to reduce the energy consumption of the system. Experimental results and comparisons with other works are given.
Multigrid Approach to Incompressible Viscous Cavity Flows
NASA Technical Reports Server (NTRS)
Wood, William A.
1996-01-01
Two-dimensional incompressible viscous driven-cavity flows are computed for Reynolds numbers in the range 100-20,000 using a loosely coupled, implicit, second-order central-difference scheme. Mesh sequencing and three-level V-cycle multigrid error smoothing are incorporated into the symmetric Gauss-Seidel time-integration algorithm. Parametric studies of the numerical parameters are performed, achieving reductions in solution times of more than 60 percent with the full multigrid approach. Details of the circulation patterns are investigated in cavities of 2-to-1, 1-to-1, and 1-to-2 depth-to-width ratios.
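As an illustration of the V-cycle ingredients mentioned (Gauss-Seidel smoothing, restriction, coarse-grid correction, prolongation), here is a minimal 1D Poisson sketch. It uses simple injection restriction and linear interpolation, and is not the paper's 2D flow solver:

```python
import numpy as np

def gauss_seidel(u, f, h, iters):
    # Smooth -u'' = f with homogeneous Dirichlet boundaries
    for _ in range(iters):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def v_cycle(u, f, h):
    u = gauss_seidel(u, f, h, 3)          # pre-smooth
    if len(u) <= 3:
        return u
    # Residual r = f + u'' of the current iterate
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
    rc = r[::2].copy()                    # restriction by injection
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)   # coarse-grid error solve
    u += np.interp(np.arange(len(u)),     # prolongate and correct
                   np.arange(0, len(u), 2), ec)
    return gauss_seidel(u, f, h, 3)       # post-smooth

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)          # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # O(h^2) discretization error
```

Each V-cycle reduces the algebraic error by a mesh-independent factor, which is what makes the reported solution-time savings possible.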
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkov, M V; Garanin, S G; Dolgopolov, Yu V
2014-11-30
A seven-channel fibre laser system operated in the master oscillator – multichannel power amplifier configuration is phase locked using a stochastic parallel gradient algorithm. The phase modulators on lithium niobate crystals are controlled by a multichannel electronic unit whose microcontroller processes signals in real time. Dynamic phase locking of the laser system with a bandwidth of 14 kHz is demonstrated; the phasing time is 3 – 4 ms. (fibre and integrated-optical structures)
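A stochastic parallel gradient algorithm of this kind perturbs all channel phases with a random dither, measures the change in a combining metric, and steps along the estimated gradient. A toy seven-channel sketch (idealized unit-amplitude channels; the gain and dither amplitude are assumed values, not the experimental controller's):

```python
import numpy as np

rng = np.random.default_rng(0)

def metric(phases):
    """Combined-beam intensity of 7 unit-amplitude channels."""
    return abs(np.exp(1j * phases).sum()) ** 2

def spgd_step(phases, gain=0.3, amp=0.1):
    """One SPGD iteration: apply a +/- random dither to every channel
    in parallel, then step along the estimated metric gradient."""
    delta = amp * rng.choice([-1.0, 1.0], size=phases.size)
    dJ = metric(phases + delta) - metric(phases - delta)
    return phases + gain * dJ * delta

phases = rng.uniform(0.0, 2.0 * np.pi, 7)
for _ in range(1000):
    phases = spgd_step(phases)
print(metric(phases) / 49.0)  # approaches 1 (perfect combining) when locked
```

The attraction of SPGD in hardware is that only the single scalar metric must be measured, regardless of the channel count.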
Lu, Dan; Joshi, Amita; Wang, Bei; Olsen, Steve; Yi, Joo-Hee; Krop, Ian E; Burris, Howard A; Girish, Sandhya
2013-08-01
Trastuzumab emtansine (T-DM1) is an antibody-drug conjugate recently approved by the US Food and Drug Administration for the treatment of human epidermal growth factor receptor 2 (HER2)-positive metastatic breast cancer previously treated with trastuzumab and taxane chemotherapy. It comprises the microtubule inhibitory cytotoxic agent DM1 conjugated to the HER2-targeted humanized monoclonal antibody trastuzumab via a stable linker. To characterize the pharmacokinetics of T-DM1 in patients with metastatic breast cancer, concentrations of multiple analytes were quantified, including serum concentrations of T-DM1 conjugate and total trastuzumab (the sum of conjugated and unconjugated trastuzumab), as well as plasma concentrations of DM1. The clearance of T-DM1 conjugate is approximately 2 to 3 times faster than its parent antibody, trastuzumab. However, the clearance pathways accounting for this faster clearance rate are unclear. An integrated population pharmacokinetic model that simultaneously fits the pharmacokinetics of T-DM1 conjugate and total trastuzumab can help to elucidate the clearance pathways of T-DM1. The model can also be used to predict total trastuzumab pharmacokinetic profiles based on T-DM1 conjugate pharmacokinetic data and sparse total trastuzumab pharmacokinetic data, thereby reducing the frequency of pharmacokinetic sampling. T-DM1 conjugate and total trastuzumab serum concentration data, including baseline trastuzumab concentrations prior to T-DM1 treatment, from phase I and II studies were used to develop this integrated population pharmacokinetic model. Based on a hypothetical T-DM1 catabolism scheme, two-compartment models for T-DM1 conjugate and trastuzumab were integrated by assuming a one-step deconjugation clearance from T-DM1 conjugate to trastuzumab. 
The ability of the model to predict the total trastuzumab pharmacokinetic profile based on T-DM1 conjugate pharmacokinetics and various sampling schemes of total trastuzumab pharmacokinetics was assessed to evaluate total trastuzumab sampling schemes. The final model reflects a simplified catabolism scheme of T-DM1, suggesting that T-DM1 clearance pathways include both deconjugation and proteolytic degradation. The model fits T-DM1 conjugate and total trastuzumab pharmacokinetic data simultaneously. The deconjugation clearance of T-DM1 was estimated to be ~0.4 L/day. Proteolytic degradation clearances for T-DM1 and trastuzumab were similar (~0.3 L/day). This model accurately predicts total trastuzumab pharmacokinetic profiles based on T-DM1 conjugate pharmacokinetic data and sparse total trastuzumab pharmacokinetic data sampled at preinfusion and end of infusion in cycle 1, and in one additional steady state cycle. This semi-mechanistic integrated model links T-DM1 conjugate and total trastuzumab pharmacokinetic data, and supports the inclusion of both proteolytic degradation and deconjugation as clearance pathways in the hypothetical T-DM1 catabolism scheme. The model attributes a faster T-DM1 conjugate clearance versus that of trastuzumab to the presence of a deconjugation process and suggests a similar proteolytic clearance of T-DM1 and trastuzumab. Based on the model and T-DM1 conjugate pharmacokinetic data, a sparse pharmacokinetic sampling scheme for total trastuzumab provides an entire pharmacokinetic profile with similar predictive accuracy to that of a dense pharmacokinetic sampling scheme.
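The hypothetical catabolism scheme can be illustrated with a crude forward-Euler simulation of central-compartment amounts, using the reported clearance estimates (~0.4 L/day deconjugation, ~0.3 L/day proteolytic) and an assumed central volume. This one-compartment toy omits the model's peripheral compartments and is not the published population model:

```python
# Hypothetical parameters in the spirit of the reported estimates
V = 3.0          # L, central volume (assumed, illustrative)
CL_dec = 0.4     # L/day, deconjugation clearance of T-DM1 conjugate
CL_deg = 0.3     # L/day, proteolytic clearance (both analytes)

def simulate(dose_mg, days, dt=0.01):
    """Central amounts: the conjugate is cleared by both routes, and
    deconjugation feeds the unconjugated-trastuzumab pool."""
    tdm1, tmab = dose_mg, 0.0
    for _ in range(int(days / dt)):
        out_dec = CL_dec / V * tdm1
        tdm1 += dt * (-(CL_dec + CL_deg) / V * tdm1)
        tmab += dt * (out_dec - CL_deg / V * tmab)
    return tdm1, tmab

tdm1, tmab = simulate(100.0, 21.0)     # one 3-week cycle
total = tdm1 + tmab   # "total trastuzumab" = conjugated + unconjugated
print(round(tdm1, 2), round(tmab, 2), round(total, 2))
```

The faster decay of the conjugate relative to total trastuzumab falls out directly from the extra deconjugation route, which is the qualitative behavior the integrated model captures.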
NASA Technical Reports Server (NTRS)
Cartier, D. E.
1976-01-01
This concise paper considers the effect on the autocorrelation function of a pseudonoise (PN) code when the acquisition scheme only integrates coherently over part of the code and then noncoherently combines these results. The peak-to-null ratio of the effective PN autocorrelation function is shown to degrade to the square root of n, where n is the number of PN symbols over which coherent integration takes place.
NASA Technical Reports Server (NTRS)
Gallagher, R. R.
1974-01-01
Exercise subroutine modifications are implemented in an exercise-respiratory system model yielding improvement of system response to exercise forcings. A more physiologically desirable respiratory ventilation rate in addition to an improved regulation of arterial gas tensions and cerebral blood flow is observed. A respiratory frequency expression is proposed which would be appropriate as an interfacing element of the respiratory-pulsatile cardiovascular system. Presentation of a circulatory-respiratory system integration scheme along with its computer program listing is given. The integrated system responds to exercise stimulation for both nonstressed and stressed physiological states. Other integration possibilities are discussed with respect to the respiratory, pulsatile cardiovascular, thermoregulatory, and the long-term circulatory systems.
Bai, Xiao-ping; Zhang, Xi-wei
2013-01-01
Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process in which many indexes must be weighed to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes, uses a quantitative method for the cost index, uses integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes, and integrates engineering economics, reliability theory, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, this paper also presents the detailed computing processes and steps, including selecting all order indexes, establishing the index matrix, computing the score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making the analysis and decision. The presented method can offer a valuable reference for risk computation of building construction projects.
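The information-entropy ingredient in such evaluations is typically the entropy weight method, in which indexes that discriminate more between candidate schemes receive larger weights. A minimal sketch with hypothetical scores (not the paper's case data):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: rows = candidate schemes, columns =
    evaluation indexes. Lower-entropy (more discriminating) indexes
    receive larger weights."""
    P = X / X.sum(axis=0)                      # normalize each index
    n = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(n)   # entropy in [0, 1]
    d = 1.0 - E                                # degree of divergence
    return d / d.sum()

# 3 hypothetical schemes scored on 4 indexes (cost, progress, quality, safety)
X = np.array([[0.8, 0.7, 0.9, 0.6],
              [0.8, 0.5, 0.9, 0.9],
              [0.8, 0.9, 0.9, 0.3]])
w = entropy_weights(X)
scores = X @ w
best = int(np.argmax(scores))
print(np.round(w, 3), "best scheme:", best)
```

Note that the two constant columns (identical scores for every scheme) carry no discriminating information and get essentially zero weight, so the ranking is driven entirely by the indexes on which the schemes actually differ.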
NASA Astrophysics Data System (ADS)
Bensiali, Bouchra; Bodi, Kowsik; Ciraolo, Guido; Ghendrih, Philippe; Liandrat, Jacques
2013-03-01
In this work, we compare different interpolation operators in the context of particle tracking, with an emphasis on situations involving velocity fields with steep gradients. Since, in this case, most classical methods give rise to the Gibbs phenomenon (generation of oscillations near discontinuities), we present new methods for particle tracking based on subdivision schemes, especially the Piecewise Parabolic Harmonic (PPH) scheme, which has shown its advantage in image processing in the presence of strong contrasts. First, an analytic univariate case with a discontinuous velocity field is considered in order to highlight the effect of the Gibbs phenomenon on trajectory calculation, and theoretical results are provided. Then we show, regardless of the interpolation method, the need for a conservative approach when integrating a conservative problem with a velocity field deriving from a potential. Finally, the PPH scheme is applied in a more realistic case of a time-dependent potential encountered in the edge turbulence of magnetically confined plasmas, to compare the propagation of density structures (turbulence bursts) with the dynamics of test particles. This study highlights the difference between particle transport and density transport in turbulent fields.
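In one common formulation, the PPH rule replaces the arithmetic mean of second differences used by the linear four-point scheme with a sign-guarded harmonic mean, which suppresses Gibbs overshoot near jumps. A toy comparison on step data (illustrative only; this is not the paper's particle-tracking code, and the exact rule used there may differ):

```python
def harmonic(a, b):
    """Harmonic mean, zeroed when the arguments disagree in sign."""
    return 2.0 * a * b / (a + b) if a * b > 0 else 0.0

def pph_midpoint(f0, f1, f2, f3):
    """PPH-style midpoint prediction between f1 and f2."""
    d1 = f0 - 2.0 * f1 + f2            # second differences
    d2 = f1 - 2.0 * f2 + f3
    return 0.5 * (f1 + f2) - harmonic(d1, d2) / 8.0

def linear_midpoint(f0, f1, f2, f3):
    """Classical linear four-point midpoint rule."""
    return (-f0 + 9.0 * f1 + 9.0 * f2 - f3) / 16.0

# Step data: the linear rule under/overshoots near the jump, PPH does not
step = [0.0, 0.0, 0.0, 1.0]
lin = linear_midpoint(*step)
pph = pph_midpoint(*step)
print(lin, pph)   # linear dips below 0; PPH stays within the data range
```

The same mechanism is what tames spurious oscillations when a particle trajectory crosses a steep velocity gradient.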
On regularizing the MCTDH equations of motion
NASA Astrophysics Data System (ADS)
Meyer, Hans-Dieter; Wang, Haobin
2018-03-01
The Multiconfiguration Time-Dependent Hartree (MCTDH) approach leads to equations of motion (EOM) which become singular when there are unoccupied so-called single-particle functions (SPFs). Starting from a Hartree product, all SPFs, except the first one, are unoccupied initially. To solve the MCTDH-EOMs numerically, one therefore has to remove the singularity by a regularization procedure. Usually the inverse of a density matrix is regularized. Here we argue and show that regularizing the coefficient tensor, which in turn regularizes the density matrix as well, leads to an improved performance of the EOMs. The initially unoccupied SPFs are rotated faster into their "correct direction" in Hilbert space and the final results are less sensitive to the choice of the value of the regularization parameter. For a particular example (a spin-boson system studied with a transformed Hamiltonian), we could even show that only with the new regularization scheme could one obtain correct results. Finally, in Appendix A, a new integration scheme for the MCTDH-EOMs developed by Lubich and co-workers is discussed. It is argued that this scheme does not solve the problem of the unoccupied natural orbitals because this scheme ignores the latter and does not propagate them at all.
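For reference, the conventional approach regularizes the eigenvalues of the density matrix before inversion, commonly as rho -> rho + eps*exp(-rho/eps). A minimal sketch of that standard procedure (illustrative values; recall that the paper's proposal is instead to regularize the coefficient tensor):

```python
import numpy as np

def regularized_inverse(rho, eps=1e-8):
    """Invert a (possibly singular) density matrix after the standard
    eigenvalue regularization w -> w + eps*exp(-w/eps)."""
    w, U = np.linalg.eigh(rho)
    w_reg = w + eps * np.exp(-w / eps)   # lifts zero eigenvalues to ~eps
    return U @ np.diag(1.0 / w_reg) @ U.T.conj()

# Density matrix with one unoccupied natural orbital (zero eigenvalue)
rho = np.diag([0.9, 0.1, 0.0])
rho_inv = regularized_inverse(rho)
print(np.diag(rho_inv))  # finite everywhere; ~1/eps on the empty orbital
```

Occupied eigenvalues are left essentially untouched (the exponential correction is negligible for w >> eps), while the unoccupied one is inverted to the large but finite value 1/eps; the sensitivity of results to the choice of eps is exactly the issue the abstract addresses.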
An efficient blocking M2L translation for low-frequency fast multipole method in three dimensions
NASA Astrophysics Data System (ADS)
Takahashi, Toru; Shimba, Yuta; Isakari, Hiroshi; Matsumoto, Toshiro
2016-05-01
We propose an efficient scheme to perform the multipole-to-local (M2L) translation in the three-dimensional low-frequency fast multipole method (LFFMM). Our strategy is to combine a group of matrix-vector products associated with M2L translations into a matrix-matrix product in order to diminish the memory traffic. For this purpose, we first developed a grouping method (termed internal blocking) based on the congruent transformations (rotational and reflectional symmetries) of the M2L translators for each target box in the FMM hierarchy (adaptive octree). Next, we considered another method of grouping (termed external blocking) that is able to handle M2L translations for multiple target boxes collectively by exploiting the translational invariance of the M2L translation. By combining these internal and external blockings, the M2L translation can be performed efficiently whilst preserving the numerical accuracy exactly. We assessed the proposed blocking scheme numerically and applied it to the boundary integral equation method to solve electromagnetic scattering problems for a perfect electric conductor. The numerical results show that the proposed M2L scheme achieves a speedup of a few times over the non-blocking scheme.
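The core blocking idea, replacing many matrix-vector products that share one M2L translator with a single matrix-matrix product, can be sketched as follows (random data and arbitrary sizes; the actual scheme additionally exploits the symmetries and octree structure described above):

```python
import numpy as np

rng = np.random.default_rng(3)

p = 64                                       # coefficients per box
T = rng.standard_normal((p, p))              # one shared M2L translator
multipoles = rng.standard_normal((p, 200))   # 200 boxes using this translator

# Non-blocking: one matrix-vector product per box (T is re-read 200 times)
locals_mv = np.stack([T @ multipoles[:, j] for j in range(200)], axis=1)

# Blocking: a single matrix-matrix product; identical result,
# but T is streamed through memory once and BLAS-3 kernels apply
locals_mm = T @ multipoles

print(np.allclose(locals_mv, locals_mm))  # True
```

The arithmetic count is unchanged; the gain comes purely from reduced memory traffic and better cache reuse, which matches the abstract's claim that accuracy is preserved exactly.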
NASA Technical Reports Server (NTRS)
Sellers, P. J.; Berry, J. A.; Collatz, G. J.; Field, C. B.; Hall, F. G.
1992-01-01
The theoretical analyses of Sellers (1985, 1987), which linked canopy spectral reflectance properties to (unstressed) photosynthetic rates and conductances, are critically reviewed and significant shortcomings are identified. These are addressed in this article principally through the incorporation of a more sophisticated and realistic treatment of leaf physiological processes within a new canopy integration scheme. The results indicate that area-averaged spectral vegetation indices, as obtained from coarse resolution satellite sensors, may give good estimates of the area-integrals of photosynthesis and conductance even for spatially heterogenous (though physiologically uniform) vegetation covers.
An integral equation-based numerical solver for Taylor states in toroidal geometries
NASA Astrophysics Data System (ADS)
O'Neil, Michael; Cerfon, Antoine J.
2018-04-01
We present an algorithm for the numerical calculation of Taylor states in toroidal and toroidal-shell geometries using an analytical framework developed for the solution to the time-harmonic Maxwell equations. Taylor states are a special case of what are known as Beltrami fields, or linear force-free fields. The scheme of this work relies on the generalized Debye source representation of Maxwell fields and an integral representation of Beltrami fields which immediately yields a well-conditioned second-kind integral equation. This integral equation has a unique solution whenever the Beltrami parameter λ is not a member of a discrete, countable set of resonances which physically correspond to spontaneous symmetry breaking. Several numerical examples relevant to magnetohydrodynamic equilibria calculations are provided. Lastly, our approach easily generalizes to arbitrary geometries, both bounded and unbounded, and of varying genus.
Piecewise linear approximation for hereditary control problems
NASA Technical Reports Server (NTRS)
Propst, Georg
1987-01-01
Finite dimensional approximations are presented for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems when a quadratic cost integral has to be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in case the cost integral ranges over a finite time interval as well as in the case it ranges over an infinite time interval. The arguments in the latter case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense. This feature is established using a vector-component stability criterion in the state space R(n) x L(2) and the favorable eigenvalue behavior of the piecewise linear approximations.
A Dynamic Finite Element Method for Simulating the Physics of Fault Systems
NASA Astrophysics Data System (ADS)
Saez, E.; Mora, P.; Gross, L.; Weatherley, D.
2004-12-01
We introduce a dynamic Finite Element method using a novel high-level scripting language to describe the physical equations, boundary conditions and time integration scheme. The library we use is the parallel Finley library: a finite element kernel library designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208-processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault), with elastic behavior. Stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time using a Verlet scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt a previously described 2D model for simulating the dynamics of parallel fault systems to the Finite Element method. The approach uses a frictional relation along faults that is slip and slip-rate dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. To illustrate the new Finite Element model, single- and multi-fault simulation examples are presented.
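An explicit Verlet-type time stepper of the kind used for the elasticity problem can be sketched as follows (a generic velocity Verlet loop on a toy oscillator, not the FEM solver itself):

```python
import numpy as np

def velocity_verlet(accel, x0, v0, dt, steps):
    """Explicit, second-order, symplectic time stepping of x'' = accel(x),
    the standard choice for explicit elastodynamics."""
    x, v = np.asarray(x0, float), np.asarray(v0, float)
    a = accel(x)
    for _ in range(steps):
        x = x + dt * v + 0.5 * dt * dt * a   # position update
        a_new = accel(x)
        v = v + 0.5 * dt * (a + a_new)       # velocity update (averaged accel)
        a = a_new
    return x, v

# Unit harmonic oscillator: energy is nearly conserved over long runs
accel = lambda x: -x
x, v = velocity_verlet(accel, 1.0, 0.0, 0.01, 100_000)
energy = 0.5 * (v**2 + x**2)
print(abs(energy - 0.5))  # bounded energy error despite 10^5 steps
```

The absence of secular energy drift is what makes this family of schemes attractive for long earthquake-cycle simulations, subject to the usual CFL-type restriction on dt.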
Nagy-Soper Subtraction: a Review
NASA Astrophysics Data System (ADS)
Robens, Tania
2013-07-01
In this review, we present an alternative NLO subtraction scheme based on the splitting kernels of an improved parton shower that promises to facilitate the inclusion of higher-order corrections into Monte Carlo event generators. We give expressions for the scheme for massless emitters, and point to work on the extension to massive cases. As an example, we show results for the C parameter of the process e+e- → 3 jets at NLO, which have recently been published as a verification of this scheme. We also provide analytic expressions for integrated counterterms that have not been presented in previous work, and comment on the possibility of analytic approximations for the remaining numerical integrals.
Yoles-Frenkel, Michal; Kahan, Anat; Ben-Shaul, Yoram
2018-05-23
The vomeronasal system (VNS) is a major vertebrate chemosensory system that functions in parallel to the main olfactory system (MOS). Despite many similarities, the two systems dramatically differ in the temporal domain. While MOS responses are governed by breathing and follow a subsecond temporal scale, VNS responses are uncoupled from breathing and evolve over seconds. This suggests that the contribution of response dynamics to stimulus information will differ between these systems. While temporal dynamics in the MOS are widely investigated, similar analyses in the accessory olfactory bulb (AOB) are lacking. Here, we have addressed this issue using controlled stimulus delivery to the vomeronasal organ of male and female mice. We first analyzed the temporal properties of AOB projection neurons and demonstrated that neurons display prolonged, variable, and neuron-specific characteristics. We then analyzed various decoding schemes using AOB population responses. We showed that compared with the simplest scheme (i.e., integration of spike counts over the entire response period), the division of this period into smaller temporal bins actually yields poorer decoding accuracy. However, optimal classification accuracy can be achieved well before the end of the response period by integrating spike counts within temporally defined windows. Since VNS stimulus uptake is variable, we analyzed decoding using limited information about stimulus uptake time, and showed that with enough neurons, such time-invariant decoding is feasible. Finally, we conducted simulations that demonstrated that, unlike the main olfactory bulb, the temporal features of AOB neurons disfavor decoding with high temporal accuracy, and, rather, support decoding without precise knowledge of stimulus uptake time. SIGNIFICANCE STATEMENT A key goal in sensory system research is to identify which metrics of neuronal activity are relevant for decoding stimulus features. 
Here, we describe the first systematic analysis of temporal coding in the vomeronasal system (VNS), a chemosensory system devoted to socially relevant cues. Compared with the main olfactory system, timescales of VNS function are inherently slower and variable. Using various analyses of real and simulated data, we show that the consideration of response times relative to stimulus uptake can aid the decoding of stimulus information from neuronal activity. However, response properties of accessory olfactory bulb neurons favor decoding schemes that do not rely on the precise timing of stimulus uptake. Such schemes are consistent with the variable nature of VNS stimulus uptake. Copyright © 2018 the authors.
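The window-based spike-count decoding analyzed here can be sketched with a toy population: Poisson counts integrated over a response window, decoded by matching to per-stimulus templates. All tuning parameters below are hypothetical, not fitted to AOB data:

```python
import numpy as np

rng = np.random.default_rng(1)

def population_counts(stimulus, n_neurons=40, window=2.0):
    """Poisson spike counts integrated over a response window;
    each neuron has a stimulus-specific rate (hypothetical tuning)."""
    rates = 5.0 + 4.0 * np.sin(stimulus + np.arange(n_neurons))
    return rng.poisson(rates * window)

stimuli = [0.0, 1.5, 3.0]
# Templates: mean training response for each stimulus
templates = {s: np.mean([population_counts(s) for _ in range(50)], axis=0)
             for s in stimuli}

def decode(counts):
    """Nearest-template (minimum squared distance) classifier."""
    return min(stimuli, key=lambda s: np.sum((counts - templates[s]) ** 2))

correct = sum(decode(population_counts(s)) == s
              for s in stimuli for _ in range(30))
print(correct / 90.0)  # accuracy well above the 1/3 chance level
```

Shrinking the integration window (or splitting it into fine bins, as the abstract discusses) lowers the counts and raises the relative Poisson noise, which is one intuition for why finer temporal binning can hurt decoding in this system.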
NASA Technical Reports Server (NTRS)
Li, Wei; Saleeb, Atef F.
1995-01-01
This two-part report is concerned with the development of a general framework for the implicit time-stepping integrators for the flow and evolution equations in generalized viscoplastic models. The primary goal is to present a complete theoretical formulation, and to address in detail the algorithmic and numerical analysis aspects involved in its finite element implementation, as well as to critically assess the numerical performance of the developed schemes in a comprehensive set of test cases. On the theoretical side, the general framework is developed on the basis of the unconditionally-stable, backward-Euler difference scheme as a starting point. Its mathematical structure is of sufficient generality to allow a unified treatment of different classes of viscoplastic models with internal variables. In particular, two specific models of this type, which are representative of the present state of the art in metal viscoplasticity, are considered in applications reported here; i.e., fully associative (GVIPS) and non-associative (NAV) models. The matrix forms developed for both these models are directly applicable for both initially isotropic and anisotropic materials, in general (three-dimensional) situations as well as subspace applications (i.e., plane stress/strain, axisymmetric, generalized plane stress in shells). On the computational side, issues related to efficiency and robustness are emphasized in developing the (local) iterative algorithm. In particular, closed-form expressions for residual vectors and (consistent) material tangent stiffness arrays are given explicitly for both GVIPS and NAV models, with their maximum sizes 'optimized' to depend only on the number of independent stress components (but independent of the number of viscoplastic internal state parameters). Significant robustness of the local iterative solution is provided by complementing the basic Newton-Raphson scheme with a line-search strategy for convergence.
In the present second part of the report, we focus on the specific details of the numerical schemes, and associated computer algorithms, for the finite-element implementation of GVIPS and NAV models.
Propulsion system performance resulting from an integrated flight/propulsion control design
NASA Technical Reports Server (NTRS)
Mattern, Duane; Garg, Sanjay
1992-01-01
Propulsion-system-specific results are presented from the application of the integrated methodology for propulsion and airframe control (IMPAC) design approach to integrated flight/propulsion control design for a 'short takeoff and vertical landing' (STOVL) aircraft in transition flight. The IMPAC method is briefly discussed and the propulsion system specifications for the integrated control design are examined. The structure of a linear engine controller that results from partitioning a linear centralized controller is discussed. The details of a nonlinear propulsion control system are presented, including a scheme to protect the engine operational limits: the fan surge margin and the acceleration/deceleration schedule that limits the fuel flow. Also, a simple but effective multivariable integrator windup protection scheme is examined. Nonlinear closed-loop simulation results are presented for two typical pilot commands for transition flight: acceleration while maintaining flightpath angle and a change in flightpath angle while maintaining airspeed. The simulation nonlinearities include the airframe/engine coupling, the actuator and sensor dynamics and limits, the protection scheme for the engine operational limits, and the integrator windup protection. Satisfactory performance of the total airframe plus engine system for transition flight, as defined by the specifications, was maintained during the limit operation of the closed-loop engine subsystem.
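A simple integrator windup protection of the back-calculation type can be sketched as follows; the PI structure, gains and actuator limits are illustrative, not the STOVL controller's multivariable scheme:

```python
def pi_with_antiwindup(error, dt, state, kp=2.0, ki=1.0, kb=1.0,
                       u_min=-1.0, u_max=1.0):
    """One PI step with back-calculation windup protection: the
    saturation mismatch (u - u_unsat) is fed back to bleed off the
    integrator instead of letting it wind up."""
    u_unsat = kp * error + ki * state
    u = min(max(u_unsat, u_min), u_max)      # actuator saturation
    state += dt * (error + kb * (u - u_unsat))
    return u, state

state = 0.0
for _ in range(1000):            # large sustained error keeps u saturated
    u, state = pi_with_antiwindup(5.0, 0.01, state)
print(round(u, 3), round(state, 3))   # integrator settles, stays bounded
```

Without the `kb` feedback term the integrator state would grow without bound during saturation, causing large overshoot once the error finally reverses; with it, the integrator settles at a bounded value and the controller recovers promptly.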
Yang, Hui; Zhang, Jie; Ji, Yuefeng; Tian, Rui; Han, Jianrui; Lee, Young
2015-11-30
Data center interconnection with elastic optical networks is a promising scenario for meeting the high-burstiness and high-bandwidth requirements of data center services. In our previous work, we implemented multi-stratum resilience between IP and elastic optical networks that accommodates data center services. Building on this, the present study extends that work to consider resource integration, breaking the limits of individual network devices to enhance resource utilization. We propose a novel multi-stratum resources integration (MSRI) architecture based on network function virtualization in a software defined elastic data center optical interconnect. A resource integrated mapping (RIM) scheme for MSRI is introduced in the proposed architecture. The MSRI can accommodate data center services through resource integration when a single function or resource is too scarce to provision the services, and enables globally integrated optimization of optical network and application resources. The overall feasibility and efficiency of the proposed architecture are experimentally verified on the control plane of an OpenFlow-based enhanced software defined networking (eSDN) testbed. The performance of the RIM scheme under a heavy traffic load scenario is also quantitatively evaluated within the MSRI architecture in terms of path blocking probability, provisioning latency and resource utilization, and compared with other provisioning schemes.
Optimizing Cubature for Efficient Integration of Subspace Deformations
An, Steven S.; Kim, Theodore; James, Doug L.
2009-01-01
We propose an efficient scheme for evaluating nonlinear subspace forces (and Jacobians) associated with subspace deformations. The core problem we address is efficient integration of the subspace force density over the 3D spatial domain. Similar to Gaussian quadrature schemes that efficiently integrate functions that lie in particular polynomial subspaces, we propose cubature schemes (multi-dimensional quadrature) optimized for efficient integration of force densities associated with particular subspace deformations, particular materials, and particular geometric domains. We support generic subspace deformation kinematics and nonlinear hyperelastic materials. For an r-dimensional deformation subspace with O(r) cubature points, our method is able to evaluate subspace forces at O(r^2) cost. We also describe composite cubature rules for runtime error estimation. Results are provided for various subspace deformation models, several hyperelastic materials (St. Venant-Kirchhoff, Mooney-Rivlin, Arruda-Boyce), and multimodal (graphics, haptics, sound) applications. We show dramatically better efficiency than traditional Monte Carlo integration. CR Categories: I.6.8 [Simulation and Modeling]: Types of Simulation—Animation; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—Physically based modeling; G.1.4 [Mathematics of Computing]: Numerical Analysis—Quadrature and Numerical Differentiation. PMID:19956777
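The cubature idea above, approximating a spatial integral of a force density by a small weighted sum of point evaluations, can be sketched generically. The code below is not the authors' optimized cubature training procedure; it only shows the evaluation step, and the usage example reuses classical 2-point Gauss-Legendre points (which the paper generalizes) on a 1-D toy integrand.

```python
import numpy as np

def subspace_force(q, points, weights, force_density):
    """Approximate the spatial integral of the reduced force density by a
    cubature rule: f(q) ≈ Σ_i w_i * g(x_i, q), one g-evaluation per point."""
    f = np.zeros_like(np.asarray(q, dtype=float))
    for x, w in zip(points, weights):
        f = f + w * force_density(x, q)
    return f

# Toy usage: 2-point Gauss-Legendre integrates x^2 over [-1, 1] exactly (2/3)
pts, wts = [-1.0 / 3 ** 0.5, 1.0 / 3 ** 0.5], [1.0, 1.0]
f = subspace_force(np.zeros(1), pts, wts, lambda x, q: np.array([x * x]))
```

The key design point is that the cost scales with the number of cubature points rather than with the full mesh, which is what makes the O(r^2) force evaluation possible.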
Implicit-Explicit Time Integration Methods for Non-hydrostatic Atmospheric Models
NASA Astrophysics Data System (ADS)
Gardner, D. J.; Guerra, J. E.; Hamon, F. P.; Reynolds, D. R.; Ullrich, P. A.; Woodward, C. S.
2016-12-01
The Accelerated Climate Modeling for Energy (ACME) project is developing a non-hydrostatic atmospheric dynamical core for high-resolution coupled climate simulations on Department of Energy leadership class supercomputers. An important factor in computational efficiency is avoiding the overly restrictive time step size limitations of fully explicit time integration methods due to the stiffest modes present in the model (acoustic waves). In this work we compare the accuracy and performance of different Implicit-Explicit (IMEX) splittings of the non-hydrostatic equations and various Additive Runge-Kutta (ARK) time integration methods. Results utilizing the Tempest non-hydrostatic atmospheric model and the ARKode package show that the choice of IMEX splitting and ARK scheme has a significant impact on the maximum stable time step size as well as solution quality. Horizontally Explicit Vertically Implicit (HEVI) approaches paired with certain ARK methods lead to greatly improved runtimes. With effective preconditioning IMEX splittings that incorporate some implicit horizontal dynamics can be competitive with HEVI results. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-699187
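As a minimal illustration of the IMEX idea (implicit treatment of the stiff, acoustic-like terms, explicit treatment of the rest), here is a first-order IMEX Euler step for a scalar model problem; the ARK schemes compared in the abstract are higher-order generalizations of this splitting.

```python
def imex_euler_step(y, h, lam, f_explicit):
    """First-order IMEX Euler for y' = lam*y + f(y), with the stiff linear
    part lam*y taken implicitly and f explicitly:
        y_new = y + h*f(y) + h*lam*y_new  =>  y_new = (y + h*f(y)) / (1 - h*lam)
    """
    return (y + h * f_explicit(y)) / (1.0 - h * lam)

# Stiff relaxation y' = -100*y + 1: stable at h = 0.1, far beyond the
# explicit stability limit h < 0.02, and converges to the steady state 0.01
y = 0.0
for _ in range(50):
    y = imex_euler_step(y, 0.1, -100.0, lambda v: 1.0)
```

The step size is now limited only by the explicitly treated slow dynamics, which is exactly the advantage the HEVI splittings exploit for vertical acoustic waves.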
Modelling wetting and drying effects over complex topography
NASA Astrophysics Data System (ADS)
Tchamen, G. W.; Kahawita, R. A.
1998-06-01
The numerical simulation of free surface flows that alternately flood and dry out over complex topography is a formidable task. The model equation set generally used for this purpose is the two-dimensional (2D) shallow water wave model (SWWM). Simplified forms of this system, such as the zero inertia model (ZIM), can accommodate specific situations like slowly evolving floods over gentle slopes. Classical numerical techniques, such as finite differences (FD) and finite elements (FE), have been used for their integration over the last 20-30 years. Most of these schemes experience some kind of instability and usually fail when some particular domain under specific flow conditions is treated. The numerical instability generally manifests itself in the form of an unphysical negative depth that subsequently causes a run-time error in the computation of the celerity and/or the friction slope. The origins of this behaviour are diverse and may generally be attributed to:
1. the use of a scheme that is inappropriate for such complex flow conditions (mixed regimes);
2. improper treatment of a friction source term or of a large local curvature in topography;
3. mishandling of a cell that is partially wet or dry.
In this paper, an attempt has been made to gain a better understanding of the genesis of the instabilities, their implications and the limits of the proposed solutions. Frequently, robustness is enforced at the expense of accuracy. The need for a positive scheme, that is, a scheme that always predicts positive depths when run within the constraints of some practical stability limits, is fundamental. It is shown here how a carefully chosen scheme (in this case, an adaptation of the solver to the SWWM) can preserve positive values of water depth under both explicit and implicit time integration, high velocities and complex topography that may include dry areas.
However, the treatment of the source terms (friction, Coriolis and particularly the bathymetry) is also of prime importance and must not be overlooked. Linearization, combined with switching between explicit and implicit integration, can overcome the stiffness of the friction and Coriolis terms and provide stable numerical integration. The treatment of the bathymetry source term is much more delicate. For cells undergoing a transient wet-dry process, the imposition of zero velocity stabilizes most of the approximations. However, this artificial zero-velocity condition can be the cause of considerable error, especially when fast-moving fronts are involved. Beyond these difficulties, linked to the position of the front within a cell versus the limited resolution of a numerical grid, it appears that the second derivative that defines whether the bed is locally convex or concave is a key indicator of stability. A convex bottom may lead to unbounded solutions. It appears that this behaviour is linked not to the numerics (numerical scheme) but rather to the mathematical theory of the SWWM. These concerns about stability have, until now, taken precedence over the crucial and related question of accuracy, especially near a moving front, and over how possible inaccuracies at the leading edge may affect the solution at interior points within the domain. This paper presents an in-depth, fully two-dimensional analysis of the aforementioned problem that has not been addressed before. The purpose of the present communication is not to propose what could be viewed as a final solution, but rather to provide some key considerations that may reveal the ingredients and insight necessary for the development of accurate and robust solutions in the future.
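A positive scheme in the sense discussed above can be illustrated with a first-order upwind update of the depth in a 1-D advection model: under the CFL condition the new depth is a convex combination of old nonnegative depths, so it can never go negative. This is a toy sketch, not the authors' SWWM solver; the dry tolerance DRY_TOL is an assumed parameter.

```python
import numpy as np

DRY_TOL = 1e-8  # assumed threshold below which a cell is treated as dry

def update_depth(h, u, dx, dt):
    """First-order upwind update of h_t + (h*u)_x = 0 for constant u > 0.
    Under the CFL condition dt*u/dx <= 1 the new depth in each cell is a
    convex combination of old nonnegative depths, hence stays nonnegative."""
    assert dt * u / dx <= 1.0, "CFL violated: positivity not guaranteed"
    h_new = h.copy()
    h_new[1:] = h[1:] - dt / dx * u * (h[1:] - h[:-1])
    h_new[h_new < DRY_TOL] = 0.0   # transiently wet/dry cells set fully dry
    return h_new

# A wet/dry front advancing into two dry cells: no negative depths appear
h0 = np.array([1.0, 1.0, 0.0, 0.0, 1.0])
h1 = update_depth(h0, u=1.0, dx=1.0, dt=0.5)
```

The clamping of near-dry cells mirrors the zero-velocity treatment discussed in the abstract; the convexity argument is what a genuinely positive scheme must preserve for the full SWWM fluxes as well.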
Harmonic-phase path-integral approximation of thermal quantum correlation functions
NASA Astrophysics Data System (ADS)
Robertson, Christopher; Habershon, Scott
2018-03-01
We present an approximation to the thermal symmetric form of the quantum time-correlation function in the standard position path-integral representation. By transforming to a sum-and-difference position representation and then Taylor-expanding the potential energy surface of the system to second order, the resulting expression provides a harmonic weighting function that approximately recovers the contribution of the phase to the time-correlation function. This method is readily implemented in a Monte Carlo sampling scheme and provides exact results for harmonic potentials (for both linear and non-linear operators) and near-quantitative results for anharmonic systems for low temperatures and times that are likely to be relevant to condensed phase experiments. This article focuses on one-dimensional examples to provide insights into convergence and sampling properties, and we also discuss how this approximation method may be extended to many-dimensional systems.
Variational Algorithms for Test Particle Trajectories
NASA Astrophysics Data System (ADS)
Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.
2015-11-01
The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields multistep variational algorithms, in general. Obtaining the desired long-term numerical fidelity requires mitigation of the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy to yield a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.
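The benefit of structure-preserving time advance described above can be seen already with symplectic (semi-implicit) Euler on a pendulum: energy errors oscillate but do not drift over very long integrations. This is a generic illustration of a variational/symplectic one-step method, not the gauge-invariant Lorentz-force integrators of the abstract.

```python
import math

def symplectic_euler(q, p, h, dVdq):
    """One step of symplectic (semi-implicit) Euler for H = p^2/2 + V(q):
    kick the momentum first, then drift the position with the *updated*
    momentum. The map is symplectic, so a nearby 'shadow' Hamiltonian is
    conserved and the energy error stays bounded over long times."""
    p = p - h * dVdq(q)
    q = q + h * p
    return q, p

# Pendulum, V(q) = -cos(q): over 10^5 steps the energy error stays bounded
q, p = 1.0, 0.0
E0 = 0.5 * p * p - math.cos(q)
for _ in range(100000):
    q, p = symplectic_euler(q, p, 0.01, math.sin)
E = 0.5 * p * p - math.cos(q)
```

An explicit Euler step of the same order would show secular energy growth over this many steps; the bounded error here is the "correct qualitative behavior of the long time dynamics" the abstract refers to.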
NASA Technical Reports Server (NTRS)
Navon, I. M.; Bloom, S.; Takacs, L. L.
1985-01-01
An attempt was made to use the GLAS global 4th order shallow water equations to perform a Machenhauer nonlinear normal mode initialization (NLNMI) for the external vertical mode. A new algorithm was defined for identifying and filtering out computational modes which affect the convergence of the Machenhauer iterative procedure. The computational modes and zonal waves were linearly initialized and gravitational modes were nonlinearly initialized. The Machenhauer NLNMI was insensitive to the absence of high zonal wave numbers. The effects of the Machenhauer scheme were evaluated by performing 24 hr integrations with nondissipative and dissipative explicit time integration models. The NLNMI was found to be inferior to the Rasch (1984) pseudo-secant technique for obtaining convergence when the time scales of nonlinear forcing were much smaller than the time scales expected from the natural frequency of the mode.
Taylor Series Trajectory Calculations Including Oblateness Effects and Variable Atmospheric Density
NASA Technical Reports Server (NTRS)
Scott, James R.
2011-01-01
Taylor series integration is implemented in NASA Glenn's Spacecraft N-body Analysis Program and compared head-to-head with the code's existing 8th-order Runge-Kutta Fehlberg time integration scheme. This paper focuses on trajectory problems that include oblateness and/or variable atmospheric density. Taylor series is shown to be significantly faster and more accurate for oblateness problems up through a 4x4 field, with speedups ranging from a factor of 2 to 13. For problems with variable atmospheric density, speedups average 24 for atmospheric density alone, and 1.6 to 8.2 when density and oblateness are combined.
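Taylor series integration generates high-order derivatives recursively rather than through nested function evaluations. A minimal sketch for the model equation y' = y^2 (not the N-body force model of the paper) shows the coefficient recurrence and the series evaluation:

```python
def taylor_step(y0, h, order=20):
    """One Taylor-series step for y' = y^2 (exact solution 1/(1-t) for
    y(0) = 1). The Taylor coefficients c_k of y follow from the Cauchy
    product recurrence (k+1)*c_{k+1} = Σ_{j=0..k} c_j * c_{k-j}."""
    c = [y0]
    for k in range(order):
        conv = sum(c[j] * c[k - j] for j in range(k + 1))
        c.append(conv / (k + 1))
    # evaluate the truncated series at t = h by Horner's rule
    y = 0.0
    for ck in reversed(c):
        y = ck + y * h
    return y

y = taylor_step(1.0, 0.1)   # exact value is 1/(1 - 0.1) = 1/0.9
```

Because the order is cheap to raise, the step size can be chosen very large for smooth problems, which is the source of the speedups reported in the abstract.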
Foretelling Flares and Solar Energetic Particle Events: the FORSPEF tool
NASA Astrophysics Data System (ADS)
Anastasiadis, Anastasios; Papaioannou, Athanasios; Sandberg, Ingmar; Georgoulis, Manolis K.; Tziotziou, Kostas; Jiggens, Piers
2017-04-01
A novel integrated prediction system for both solar flares (SFs) and solar energetic particle (SEP) events is presented. The Forecasting Solar Particle Events and Flares (FORSPEF) tool provides forecasting of solar eruptive events, such as SFs, with a projection to coronal mass ejections (CMEs) (occurrence and velocity) and the likelihood of occurrence of a SEP event. In addition, FORSPEF provides nowcasting of SEP events based on actual SF and CME near-real-time data, as well as the complete SEP profile (peak flux, fluence, rise time, duration) per parent solar event. The prediction of SFs relies on a morphological method, the effective connected magnetic field strength (Beff), which is based on an assessment of potentially flaring active-region (AR) magnetic configurations and utilizes a sophisticated analysis of a large number of AR magnetograms. For the prediction of SEP events, new methods have been developed for both the likelihood of SEP occurrence and the expected SEP characteristics. In particular, using the location of the flare (longitude) and the flare size (maximum soft X-ray intensity), a reductive statistical method has been implemented. Moreover, employing CME parameters (velocity and width), proper functions per width (i.e., halo, partial halo, non-halo) and integral energy (E>30, 60, 100 MeV) have been identified. In our technique, warnings are issued for all >C1.0 soft X-ray flares. The prediction time in the forecasting scheme extends to 24 hours with a refresh rate of 3 hours, while the respective prediction time for the nowcasting scheme depends on the availability of the near-real-time data and falls between 15 and 20 minutes for solar flares and 6 hours for CMEs. We present the modules of the FORSPEF system, their interconnection and the operational set-up. The dual approach in the development of FORSPEF (i.e. 
forecasting and nowcasting scheme) permits the refinement of predictions upon the availability of new data that characterize changes on the Sun and the interplanetary space, while the combined usage of SF and SEP forecasting methods upgrades FORSPEF to an integrated forecasting solution. Finally, we demonstrate the validation of the modules of the FORSPEF tool using categorical scores constructed on archived data and we further discuss independent case studies. This work has been funded through the "FORSPEF: FORecasting Solar Particle Events and Flares", ESA Contract No. 4000109641/13/NL/AK and the "SPECS: Solar Particle Events foreCasting Studies" project of the National Observatory of Athens.
NASA Astrophysics Data System (ADS)
Raju, P. V. S.; Potty, Jayaraman; Mohanty, U. C.
2011-09-01
Comprehensive sensitivity analyses of the physical parameterization schemes of the Weather Research and Forecasting (WRF-ARW core) model have been carried out for the prediction of the track and intensity of tropical cyclones, taking the example of cyclone Nargis, which formed over the Bay of Bengal and hit Myanmar on 02 May 2008, causing widespread damage in terms of human and economic losses. The model performance is also evaluated with different initial conditions at 12 h intervals starting from cyclogenesis to near the landfall time. The initial and boundary conditions for all the model simulations are drawn from the global operational analysis and forecast products of the National Centers for Environmental Prediction (NCEP-GFS), publicly available at 1° lon/lat resolution. The results of the sensitivity analyses indicate that the combination of the non-local parabolic-type exchange coefficient PBL scheme of Yonsei University (YSU), the deep and shallow convection scheme with mass flux approach for cumulus parameterization (Kain-Fritsch), and the NCEP operational cloud microphysics scheme with diagnostic mixed-phase processes (Ferrier) predicts track and intensity better when compared against the Joint Typhoon Warning Center (JTWC) estimates. Further, the final choice of physical parameterization schemes selected from the above sensitivity experiments is used for model integration with different initial conditions. The results reveal that the cyclone track, intensity and time of landfall are well simulated by the model, with an average intensity error of about 8 hPa, a maximum wind error of 12 m s-1, and a track error of 77 km. The simulations also show that the landfall time error and intensity error decrease with delayed initial conditions, suggesting that the model forecast is more dependable as the cyclone approaches the coast. The distribution and intensity of rainfall are also well simulated by the model and comparable with the TRMM estimates.
Analysis of periodically excited non-linear systems by a parametric continuation technique
NASA Astrophysics Data System (ADS)
Padmanabhan, C.; Singh, R.
1995-07-01
The dynamic behavior and frequency response of harmonically excited piecewise linear and/or non-linear systems have been the subject of several recent investigations. Most of the prior studies employed harmonic balance or Galerkin schemes, piecewise linear techniques, analog simulation and/or direct numerical integration (digital simulation). Such techniques are somewhat limited in their ability to predict all of the dynamic characteristics, including bifurcations leading to the occurrence of unstable, subharmonic, quasi-periodic and/or chaotic solutions. To overcome this problem, a parametric continuation scheme, based on the shooting method, is applied specifically to a periodically excited piecewise linear/non-linear system, in order to improve understanding as well as to obtain the complete dynamic response. Parameter regions exhibiting bifurcations to harmonic, subharmonic or quasi-periodic solutions are obtained quite efficiently and systematically. Unlike other techniques, the proposed scheme can follow period-doubling bifurcations, and with some modifications obtain stable quasi-periodic solutions and their bifurcations. This knowledge is essential in establishing conditions for the occurrence of chaotic oscillations in any non-linear system. The method is first validated through the Duffing oscillator example, the solutions to which are also obtained by conventional one-term harmonic balance and perturbation methods. The second example deals with a clearance non-linearity problem for both harmonic and periodic excitations. Predictions from the proposed scheme match well with available analog simulation data as well as with multi-term harmonic balance results. Potential savings in computational time over direct numerical integration are demonstrated for some of the example cases. Also, this work has filled in some of the solution regimes for an impact pair, which were missed previously in the literature. 
Finally, one main limitation associated with the proposed procedure is discussed.
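The shooting idea underlying the continuation scheme, finding an initial state that the period map returns to itself via Newton iteration, can be sketched for a damped, harmonically forced Duffing oscillator. The parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def flow(z0, T, f, n=1000):
    """Advance z' = f(t, z) over one forcing period T with classical RK4."""
    z, t, dt = np.array(z0, float), 0.0, T / n
    for _ in range(n):
        k1 = f(t, z)
        k2 = f(t + dt / 2, z + dt / 2 * k1)
        k3 = f(t + dt / 2, z + dt / 2 * k2)
        k4 = f(t + dt, z + dt * k3)
        z = z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return z

def shoot(z0, T, f, tol=1e-8, max_iter=20, eps=1e-6):
    """Newton iteration on the period map: find z0 with flow(z0, T) = z0.
    The 2x2 Jacobian of the residual is estimated by finite differences."""
    z0 = np.array(z0, float)
    for _ in range(max_iter):
        r = flow(z0, T, f) - z0
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((2, 2))
        for i in range(2):
            dz = np.zeros(2)
            dz[i] = eps
            J[:, i] = (flow(z0 + dz, T, f) - (z0 + dz) - r) / eps
        z0 = z0 - np.linalg.solve(J, r)
    return z0

# Damped, forced Duffing oscillator x'' + 0.2 x' + x + x^3 = 0.3 cos(t)
duffing = lambda t, z: np.array([z[1],
                                 0.3 * np.cos(t) - 0.2 * z[1] - z[0] - z[0] ** 3])
z_star = shoot([0.0, 0.0], 2 * np.pi, duffing)
```

Continuation then follows z_star as a parameter (e.g. the forcing amplitude) is varied, and the eigenvalues of the period-map Jacobian flag period-doubling and quasi-periodic bifurcations.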
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, and sometimes even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD that uses the digital gradient of the gridded ECa data as a weighting function; and the third criterion (mean of the average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented in the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time. 
The use of the bulk ECa gradient as an exhaustive variable, known at every node of the interpolation grid, allowed the optimization of the sampling scheme while distinguishing among areas with different priority levels.
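The MMSD criterion and the annealing loop described above can be sketched as follows; this is a generic re-implementation under simplified assumptions (square field, uniform perturbations, geometric cooling), not the MSANOS software itself.

```python
import math
import random

def mmsd(sites, grid):
    """Mean, over all grid nodes, of the distance to the nearest sampling
    site (the MMSD spreading criterion described above)."""
    return sum(min(math.dist(g, s) for s in sites) for g in grid) / len(grid)

def anneal(sites, grid, lo, hi, steps=1000, t0=1.0, cool=0.995):
    """Spatial simulated annealing: perturb one site at a time, accept
    worse layouts with probability exp(-dE/T), and cool the temperature."""
    cur = best = mmsd(sites, grid)
    T = t0
    for _ in range(steps):
        i = random.randrange(len(sites))
        old = sites[i]
        sites[i] = (min(max(old[0] + random.uniform(-1, 1), lo), hi),
                    min(max(old[1] + random.uniform(-1, 1), lo), hi))
        new = mmsd(sites, grid)
        if new < cur or random.random() < math.exp(-(new - cur) / T):
            cur = new
            best = min(best, cur)
        else:
            sites[i] = old          # reject: restore the previous layout
        T *= cool
    return sites, best

# Four sites initially clustered in one corner of a 10x10 field
random.seed(0)
grid = [(x, y) for x in range(11) for y in range(11)]
sites = [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0), (1.5, 1.5)]
start = mmsd(sites, grid)
sites, best = anneal(sites, grid, 0.0, 10.0, steps=500)
```

Swapping mmsd for a weighted variant (the MWMSD criterion) or a kriging-variance objective changes only the cost function; the acceptance loop is unchanged.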
On large time step TVD scheme for hyperbolic conservation laws and its efficiency evaluation
NASA Astrophysics Data System (ADS)
Qian, ZhanSen; Lee, Chun-Hian
2012-08-01
A large time step (LTS) TVD scheme originally proposed by Harten is modified and further developed in the present paper and applied to the Euler equations in multidimensional problems. By first revealing the drawbacks of Harten's original LTS TVD scheme and explaining the occurrence of the spurious oscillations, a modified formulation of its characteristic transformation is proposed, and a high-resolution, strongly robust LTS TVD scheme is formulated. The modified scheme is proven to be capable of taking larger time steps than the original one. Following the modified strategy, LTS TVD versions of Yee's upwind TVD scheme and the Yee-Roe-Davis symmetric TVD scheme are constructed. The family of LTS schemes is then extended to multidimensional problems by a time-splitting procedure, and the associated boundary-condition treatment suitable for the LTS schemes is also presented. Numerical experiments on Sod's shock tube problem and on inviscid flows over the NACA0012 airfoil and the ONERA M6 wing are performed to validate the developed schemes. Computational efficiencies of the respective schemes under different CFL numbers are also evaluated and compared. The results reveal that the improvement is sizable compared with the respective single-time-step schemes, especially for CFL numbers ranging from 1.0 to 4.0.
On coarse projective integration for atomic deposition in amorphous systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chuang, Claire Y.; Sinno, Talid; Han, Sang M.
2015-10-07
Direct molecular dynamics simulation of atomic deposition under realistic conditions is notoriously challenging because of the wide range of time scales that must be captured. Numerous simulation approaches have been proposed to address the problem, often requiring a compromise between model fidelity, algorithmic complexity, and computational efficiency. Coarse projective integration, an example application of the “equation-free” framework, offers an attractive balance between these constraints. Here, periodically applied, short atomistic simulations are employed to compute time derivatives of slowly evolving coarse variables that are then used to numerically integrate differential equations over relatively large time intervals. A key obstacle to the application of this technique in realistic settings is the “lifting” operation, in which a valid atomistic configuration is recreated from knowledge of the coarse variables. Using Ge deposition on amorphous SiO2 substrates as an example application, we present a scheme for lifting realistic atomistic configurations comprised of collections of Ge islands on amorphous SiO2 using only a few measures of the island size distribution. The approach is shown to provide accurate initial configurations to restart molecular dynamics simulations at arbitrary points in time, enabling the application of coarse projective integration for this morphologically complex system.
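The coarse projective integration loop described above (short fine-scale burst, coarse derivative estimate, projective leap) can be sketched on a scalar ODE standing in for the slow coarse variable; the molecular-dynamics burst is replaced here by a few explicit Euler micro-steps, and all step sizes are illustrative.

```python
def coarse_projective_integrate(u0, rhs, h_micro, n_burst, dt_leap, t_end):
    """Coarse projective integration: repeatedly (i) run a short burst of
    n_burst fine-scale Euler steps of size h_micro, (ii) estimate du/dt
    from the last two burst states, and (iii) leap forward by dt_leap
    using that slope instead of continuing the fine-scale simulation."""
    u, t = u0, 0.0
    while t < t_end:
        for _ in range(n_burst):           # (i) fine-scale burst
            u_prev = u
            u = u + h_micro * rhs(u)
        slope = (u - u_prev) / h_micro     # (ii) coarse time derivative
        u = u + dt_leap * slope            # (iii) projective leap
        t += n_burst * h_micro + dt_leap
    return u, t

# Slow relaxation u' = -u: half of each interval is skipped by the leap
u, t = coarse_projective_integrate(1.0, lambda v: -v, 0.01, 5, 0.05, 1.0)
```

In the real application step (i) is a restarted molecular dynamics run and step (iii) requires the "lifting" operation to rebuild an atomistic configuration from the leaped coarse variables, which is the obstacle the abstract addresses.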
High-Performance Integrated Control of water quality and quantity in urban water reservoirs
NASA Astrophysics Data System (ADS)
Galelli, S.; Castelletti, A.; Goedbloed, A.
2015-11-01
This paper contributes a novel High-Performance Integrated Control framework to support the real-time operation of urban water supply storages affected by water quality problems. We use a 3-D, high-fidelity simulation model to predict the main water quality dynamics and inform a real-time controller based on Model Predictive Control. The integration of the simulation model into the control scheme is performed by a model reduction process that identifies a low-order, dynamic emulator running 4 orders of magnitude faster. The model reduction, which relies on a semiautomatic procedural approach integrating time series clustering and variable selection algorithms, generates a compact and physically meaningful emulator that can be coupled with the controller. The framework is used to design the hourly operation of Marina Reservoir, a 3.2 Mm3 storm-water-fed reservoir located in the center of Singapore, operated for drinking water supply and flood control. Because of its recent formation from a former estuary, the reservoir suffers from high salinity levels, whose behavior is modeled with Delft3D-FLOW. Results show that our control framework reduces the minimum salinity levels by nearly 40% and cuts the average annual deficit of drinking water supply by about 2 times the active storage of the reservoir (about 4% of the total annual demand).
Real time implementation and control validation of the wind energy conversion system
NASA Astrophysics Data System (ADS)
Sattar, Adnan
The purpose of this thesis is to analyze the dynamic and transient characteristics of wind energy conversion systems, including stability issues, in a real-time environment using the Real Time Digital Simulator (RTDS). Among the power system simulation tools available, the RTDS is one of the most powerful. The RTDS has a graphical user interface called RSCAD, which contains a detailed component model library for both power system and control analysis. The hardware is based on digital signal processors mounted in racks. The RTDS has the advantage of interfacing real-world signals from external devices, and hence is used to test protection and control equipment. The dynamic and transient characteristics of fixed and variable speed wind turbine generating systems (WTGSs) are analyzed in this thesis. A Static Synchronous Compensator (STATCOM), a flexible AC transmission system (FACTS) device, is used to enhance the fault ride-through (FRT) capability of the fixed speed wind farm. A two-level voltage source converter based STATCOM is modeled in both the VSC small time-step and VSC large time-step environments of the RTDS. The simulation results of the RTDS model system are compared with off-line EMTP software, i.e., PSCAD/EMTDC. A new operational scheme for a MW-class grid-connected variable speed wind turbine driven permanent magnet synchronous generator (VSWT-PMSG) is developed. The VSWT-PMSG uses fully controlled frequency converters for grid interfacing and thus has the ability to control the real and reactive powers simultaneously. The frequency converters are modeled in the VSC small time-step of the RTDS, and a three-phase realistic grid is adopted with the RSCAD simulation through the use of the optical analogue digital converter (OADC) card of the RTDS. Steady-state and LVRT characteristics are carried out to validate the proposed operational scheme. 
Simulation results show good agreement with the real-time simulation software and thus can be used to validate the controllers for real-time operation. Integration of a Battery Energy Storage System (BESS) with a wind farm can smooth its intermittent power fluctuations. The work also focuses on the real-time implementation of a sodium-sulfur (NaS) type BESS. The BESS is integrated with the STATCOM. The main advantage of this system is that it can provide reactive power support to the system along with the real power exchange from the BESS unit. The BESS integrated with the STATCOM is modeled in the VSC small time-step of the RTDS. A cascaded vector control scheme is used for the control of the STATCOM, and a suitable control is developed for the charging/discharging of the NaS-type BESS. Results are compared with the laboratory-standard power system software PSCAD/EMTDC, and the advantages of using the RTDS in dynamic and transient characteristic analyses of wind farms are also demonstrated.
Finite-dimensional modeling of network-induced delays for real-time control systems
NASA Technical Reports Server (NTRS)
Ray, Asok; Halevi, Yoram
1988-01-01
In integrated control systems (ICS), a feedback loop is closed by the common communication channel, which multiplexes digital data from the sensor to the controller and from the controller to the actuator along with the data traffic from other control loops and management functions. Due to asynchronous time-division multiplexing in the network access protocols, time-varying delays are introduced in the control loop, which degrade the system dynamic performance and are a potential source of instability. The delayed control system is represented by a finite-dimensional, time-varying, discrete-time model which is less complex than the existing continuous-time models for time-varying delays; this approach allows for simpler schemes for analysis and simulation of the ICS.
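A discrete-time model with a time-varying input delay of the kind described above can be simulated in a few lines; the matrices, gain, and delay sequence below are illustrative, not taken from the paper.

```python
import numpy as np

def delayed_closed_loop(A, B, K, delays, x0):
    """Simulate x_{k+1} = A x_k + B u_{k - d_k} with state feedback
    u_k = -K x_k, where d_k is the network-induced delay (in samples)
    experienced by the command the actuator applies at step k."""
    x = np.array(x0, float)
    u_hist = [np.zeros(B.shape[1]) for _ in range(max(delays) + 1)]
    traj = [x.copy()]
    for d in delays:
        u_hist.append(-K @ x)             # controller output this sample
        x = A @ x + B @ u_hist[-1 - d]    # actuator applies a delayed command
        traj.append(x.copy())
    return np.array(traj)

# Unstable scalar plant stabilized through a delay-free channel (d_k = 0)
A = np.array([[1.1]])
B = np.array([[1.0]])
K = np.array([[0.5]])
traj = delayed_closed_loop(A, B, K, [0] * 30, [1.0])
```

Replacing the delay sequence with a time-varying one (e.g. random 0/1/2-sample delays drawn from a protocol model) turns this into the finite-dimensional, time-varying model the abstract analyzes, and makes visible how delay jitter can erode the stability margin.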
NASA Technical Reports Server (NTRS)
Kaushik, Dinesh K.; Baysal, Oktay
1997-01-01
Accurate computation of acoustic wave propagation may be performed more efficiently when the dispersion relations of the waves are considered. Consequently, computational algorithms that attempt to preserve these relations have been gaining popularity in recent years. In the present paper, extensions to one such scheme are discussed. By solving the linearized, 2-D Euler and Navier-Stokes equations with such a method for acoustic wave propagation, several issues were investigated. Among them were higher-order accuracy, the choice of boundary conditions and differencing stencils, effects of viscosity, low-storage time integration, generalized curvilinear coordinates, periodic series, their reflections and interference patterns from a flat wall, and scattering from a circular cylinder. The results were found to be promising en route to aeroacoustic simulations of realistic engineering problems.
Optical threshold secret sharing scheme based on basic vector operations and coherence superposition
NASA Astrophysics Data System (ADS)
Deng, Xiaopeng; Wen, Wei; Mi, Xianwu; Long, Xuewen
2015-04-01
We propose, to our knowledge for the first time, a simple optical algorithm for secret image sharing with a (2,n) threshold scheme based on basic vector operations and coherence superposition. The secret image to be shared is first divided into n shadow images by means of basic vector operations. In the reconstruction stage, the secret image can be retrieved by recording the intensity of the coherent superposition of any two shadow images. Compared with published encryption techniques, which focus narrowly on information encryption, the proposed method realizes information encryption as well as secret sharing, which further ensures the safety and integrity of the secret information and prevents power from being centralized and abused. The feasibility and effectiveness of the proposed method are demonstrated by numerical results.
NASA Technical Reports Server (NTRS)
Koch, Steven E.; Mcqueen, Jeffery T.
1987-01-01
A survey of various one- and two-way interactive nested grid techniques used in hydrostatic numerical weather prediction models is presented and the advantages and disadvantages of each method are discussed. The techniques for specifying the lateral boundary conditions for each nested grid scheme are described in detail. Averaging and interpolation techniques used when applying the coarse mesh grid (CMG) and fine mesh grid (FMG) interface conditions during two-way nesting are discussed separately. The survey shows that errors are commonly generated at the boundary between the CMG and FMG due to boundary formulation or specification discrepancies. Methods used to control this noise include application of smoothers, enhanced diffusion, or damping-type time integration schemes to model variables. The results from this survey provide the information needed to decide which one-way and two-way nested grid schemes merit future testing with the Mesoscale Atmospheric Simulation System (MASS) model. An analytically specified baroclinic wave will be used to conduct systematic tests of the chosen schemes since this will allow for objective determination of the interfacial noise in the kind of meteorological setting for which MASS is designed. Sample diagnostic plots from initial tests using the analytic wave are presented to illustrate how the model-generated noise is ascertained. These plots will be used to compare the accuracy of the various nesting schemes when incorporated into the MASS model.
Fast and efficient compression of floating-point data.
Lindstrom, Peter; Isenburg, Martin
2006-01-01
Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that must be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies them from applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data.
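A minimal sketch of the predict-then-residual idea that lossless floating-point compressors of this kind build on: predict each value, XOR the IEEE-754 bit patterns, and hand the leading-zero-rich residuals to an entropy coder (omitted here). The last-value predictor and the absence of the entropy-coding stage are simplifying assumptions, not the paper's actual codec:

```python
import struct

def f2b(x):   # IEEE-754 double -> 64-bit integer
    return struct.unpack('<Q', struct.pack('<d', x))[0]

def b2f(b):   # 64-bit integer -> IEEE-754 double
    return struct.unpack('<d', struct.pack('<Q', b))[0]

def encode(values):
    """Predict each value by its predecessor and keep the XOR of the two bit
    patterns; good predictions leave residuals with many leading zero bits,
    which a subsequent entropy coder can squeeze."""
    prev, out = 0, []
    for v in values:
        bits = f2b(v)
        out.append(bits ^ prev)
        prev = bits
    return out

def decode(residuals):
    """Exact inverse: XOR is lossless, so the round trip is bit-for-bit."""
    prev, vals = 0, []
    for r in residuals:
        prev ^= r
        vals.append(b2f(prev))
    return vals
```

Because the transform never quantizes, exact values are retained, which is the property the paper requires.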
Earing Prediction in Cup Drawing using the BBC2008 Yield Criterion
NASA Astrophysics Data System (ADS)
Vrh, Marko; Halilovič, Miroslav; Starman, Bojan; Štok, Boris; Comsa, Dan-Sorin; Banabic, Dorel
2011-08-01
The paper deals with the constitutive modelling of highly anisotropic sheet metals. It presents FEM-based earing predictions in cup drawing simulations of highly anisotropic aluminium alloys where more than four ears occur. For that purpose the BBC2008 yield criterion, a plane-stress yield criterion formulated in the form of a finite series, is used. The criterion thus defined can be expanded to retain more or fewer terms, depending on the amount of available experimental data. In order to use the model in sheet metal forming simulations, we have implemented it in the general purpose finite element code ABAQUS/Explicit via a VUMAT subroutine, considering alternatively eight or sixteen parameters (8p and 16p versions). For the integration of the constitutive model, the explicit NICE (Next Increment Corrects Error) integration scheme has been used. Owing to the scheme's effectiveness, the CPU time consumption of a simulation is comparable to that of the built-in constitutive models. Two aluminium alloys, namely AA5042-H2 and AA2090-T3, have been used for validation of the model. For both alloys the parameters of the BBC2008 model have been identified with a purpose-developed numerical procedure based on the minimization of a cost function. For both materials, the predictions of the BBC2008 model prove to be in very good agreement with the experimental results. The flexibility and accuracy of the model, together with the identification and integration procedures, guarantee the applicability of the BBC2008 yield criterion in industrial applications.
Matsuoka, Takeshi; Tanaka, Shigenori; Ebina, Kuniyoshi
2014-03-01
We propose a hierarchical reduction scheme to cope with coupled rate equations that describe the dynamics of multi-time-scale photosynthetic reactions. To numerically solve nonlinear dynamical equations containing a wide temporal range of rate constants, we first study a prototypical three-variable model. Using a separation of the time scale of rate constants combined with identified slow variables as (quasi-)conserved quantities in the fast process, we achieve a coarse-graining of the dynamical equations reduced to those at a slower time scale. By iteratively employing this reduction method, the coarse-graining of broadly multi-scale dynamical equations can be performed in a hierarchical manner. We then apply this scheme to the reaction dynamics analysis of a simplified model for an illuminated photosystem II, which involves many processes of electron and excitation-energy transfers with a wide range of rate constants. We thus confirm a good agreement between the coarse-grained and fully (finely) integrated results for the population dynamics. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
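The fast/slow reduction step can be sketched on a minimal three-variable model (not the authors' photosystem model): a fast interconversion A <-> B whose total N = A + B is the slow, conserved variable, feeding a slow product C. Rates and tolerances below are illustrative:

```python
import numpy as np

def euler(f, y0, dt, n):
    y = np.array(y0, dtype=float)
    for _ in range(n):
        y = y + dt * np.asarray(f(y))
    return y

kf, kr, ks = 100.0, 100.0, 1.0    # fast A<->B exchange, slow B->C conversion

def full(y):                      # finely resolved three-variable system
    A, B, C = y
    return (-kf * A + kr * B, kf * A - kr * B - ks * B, ks * B)

def reduced(y):                   # coarse-grained system at the slow time scale
    N, C = y                      # N = A + B is conserved by the fast process
    B = N * kf / (kf + kr)        # quasi-equilibrium share of N held as B
    return (-ks * B, ks * B)

dt, T = 1e-3, 2.0
Af, Bf, Cf = euler(full, (1.0, 0.0, 0.0), dt, int(T / dt))
Nr, Cr = euler(reduced, (1.0, 0.0), dt, int(T / dt))
```

The reduced system discards the fast rate constants entirely yet reproduces the slow population dynamics, which is the essence of the hierarchical coarse-graining described above.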
Wei, Jianming; Zhang, Youan; Sun, Meimei; Geng, Baoliang
2017-09-01
This paper presents an adaptive iterative learning control scheme for a class of nonlinear systems with unknown time-varying delays and unknown control direction, preceded by unknown nonlinear backlash-like hysteresis. A boundary layer function is introduced to construct an auxiliary error variable, which relaxes the identical initial condition assumption of iterative learning control. For the controller design, an integral Lyapunov function candidate is used, which avoids the possible singularity problem by introducing a hyperbolic tangent function. After compensating for uncertainties with time-varying delays by combining an appropriate Lyapunov-Krasovskii function with Young's inequality, an adaptive iterative learning control scheme is designed through a neural approximation technique and the Nussbaum function method. On the basis of the hyperbolic tangent function's characteristics, the system output is proved to converge to a small neighborhood of the desired trajectory by constructing a Lyapunov-like composite energy function (CEF) in two cases, while keeping all the closed-loop signals bounded. Finally, a simulation example is presented to verify the effectiveness of the proposed approach. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Investigation on navigation patterns of inertial/celestial integrated systems
NASA Astrophysics Data System (ADS)
Luo, Dacheng; Liu, Yan; Liu, Zhiguo; Jiao, Wei; Wang, Qiuyan
2014-11-01
It is known that the Strapdown Inertial Navigation System (SINS), Global Navigation Satellite System (GNSS) and Celestial Navigation System (CNS) can complement each other's advantages. The SINS/CNS integrated system, which has the characteristics of strong autonomy, high accuracy and good anti-jamming, is widely used in military and civilian applications. Similar to the SINS/GNSS integrated system, the SINS/CNS integrated system can be divided into three kinds according to the depth of integration, i.e., the loosely coupled, tightly coupled and deeply coupled patterns. In this paper, the principle and characteristics of each pattern of SINS/CNS system are analyzed. Based on a comparison of these patterns, a novel deeply coupled SINS/CNS integrated navigation scheme is proposed. The innovation of this scheme is a new star pattern matching method aided by SINS information, through which the complementary features of the two subsystems are exploited.
Taillefumier, Thibaud; Touboul, Jonathan; Magnasco, Marcelo
2012-12-01
In vivo cortical recording reveals that indirectly driven neural assemblies can produce reliable and temporally precise spiking patterns in response to stereotyped stimulation. This suggests that despite being fundamentally noisy, the collective activity of neurons conveys information through temporal coding. Stochastic integrate-and-fire models delineate a natural theoretical framework to study the interplay of intrinsic neural noise and spike timing precision. However, there are inherent difficulties in simulating their networks' dynamics in silico with standard numerical discretization schemes. Indeed, the well-posedness of the evolution of such networks requires temporally ordering every neuronal interaction, whereas the order of interactions is highly sensitive to the random variability of spiking times. Here, we address these issues for perfect stochastic integrate-and-fire neurons by designing an exact event-driven algorithm for the simulation of recurrent networks with delayed Dirac-like interactions. In addition to being exact from the mathematical standpoint, our proposed method is highly efficient numerically. We envision that our algorithm is especially indicated for studying the emergence of polychronized motifs in networks evolving under spike-timing-dependent plasticity with intrinsic noise.
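A deterministic skeleton of such an event-driven simulation, with analytic threshold crossings, delayed Dirac interactions and invalidation of stale events, can be sketched as follows; the exact resampling of stochastic first-passage times, which is the paper's actual contribution, is omitted, and the two-neuron network in the usage note is hypothetical:

```python
import heapq

def simulate_pif(drives, weights, delays, threshold=1.0, t_end=3.5):
    """Event-driven simulation of perfect (non-leaky) integrate-and-fire
    neurons dV_i/dt = drives[i], a spike of neuron i delivering a jump
    weights[i][j] to neuron j after delays[i][j].  Threshold crossings are
    scheduled analytically; stale crossing events are invalidated by a stamp."""
    n = len(drives)
    V, t_last, stamp = [0.0] * n, [0.0] * n, [0] * n
    spikes = [[] for _ in range(n)]
    heap = []

    def advance(i, t):                       # integrate the drift up to time t
        V[i] += drives[i] * (t - t_last[i])
        t_last[i] = t

    def schedule(i, t):                      # analytic next threshold crossing
        if drives[i] > 0:
            heapq.heappush(heap, (t + (threshold - V[i]) / drives[i],
                                  'spike', i, stamp[i]))

    def fire(i, t):                          # emit spike, reset, notify targets
        spikes[i].append(t)
        V[i] = 0.0
        stamp[i] += 1
        schedule(i, t)
        for j in range(n):
            if weights[i][j] != 0.0:
                heapq.heappush(heap, (t + delays[i][j], 'psp', j, i))

    for i in range(n):
        schedule(i, 0.0)
    while heap:
        t, kind, i, tag = heapq.heappop(heap)
        if t > t_end:
            break
        if kind == 'spike':
            if tag != stamp[i]:
                continue                     # superseded by an interaction
            advance(i, t)
            fire(i, t)
        else:                                # delayed jump from presynaptic tag
            advance(i, t)
            V[i] += weights[tag][i]
            stamp[i] += 1                    # pending crossing is now stale
            if V[i] >= threshold:
                fire(i, t)
            else:
                schedule(i, t)
    return spikes
```

In the hypothetical two-neuron example used in the test, neuron 0 spikes every 1.0 time units and each spike, after a 0.2 delay, pushes neuron 1 across threshold, so the interaction ordering problem described above is exercised directly.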
Computational aspects of the nonlinear normal mode initialization of the GLAS 4th order GCM
NASA Technical Reports Server (NTRS)
Navon, I. M.; Bloom, S. C.; Takacs, L.
1984-01-01
Using the normal modes of the GLAS 4th Order Model, a Machenhauer nonlinear normal mode initialization (NLNMI) was carried out for the external vertical mode using the GLAS 4th Order shallow water equations model for an equivalent depth corresponding to that associated with the external vertical mode. A simple procedure was devised which was directed at identifying computational modes by following the rate of increase of BAL_M, the partial (with respect to the zonal wavenumber m) sum of squares of the time change of the normal mode coefficients (for fixed vertical mode index) varying over the latitude index L of symmetric or antisymmetric gravity waves. A working algorithm is presented which speeds up the convergence of the iterative Machenhauer NLNMI. A 24 h integration using the NLNMI state was carried out using both Matsuno and leap-frog time-integration schemes; these runs were then compared to a 24 h integration starting from a non-initialized state. The maximal impact of the nonlinear normal mode initialization was found to occur 6-10 hours after the initial time.
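The contrast between the Matsuno and leap-frog schemes used above can be seen on the oscillation equation du/dt = i*omega*u: leap-frog is (nearly) amplitude-neutral while Matsuno damps the solution each step. The step size and step count below are illustrative:

```python
import numpy as np

def rhs(y, omega=1.0):
    # oscillation equation dy/dt = i*omega*y written as a real 2-vector
    return np.array([-omega * y[1], omega * y[0]])

def leapfrog(y0, dt, n):
    y_prev, y = y0, y0 + dt * rhs(y0)        # Euler start-up step
    for _ in range(n - 1):
        y_prev, y = y, y_prev + 2.0 * dt * rhs(y)
    return y

def matsuno(y0, dt, n):
    y = y0
    for _ in range(n):
        y_star = y + dt * rhs(y)             # forward (predictor) step
        y = y + dt * rhs(y_star)             # corrector evaluated at y_star
    return y

y0 = np.array([1.0, 0.0])
dt, n = 0.2, 100
amp_lf = np.linalg.norm(leapfrog(y0, dt, n))   # leap-frog: amplitude stays ~ 1
amp_ma = np.linalg.norm(matsuno(y0, dt, n))    # Matsuno: damped every step
```

The Matsuno damping is often desirable immediately after initialization, since it selectively removes high-frequency gravity-wave noise, which is one reason both schemes are compared in studies like the one above.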
Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique
2011-05-01
In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
Integrated response toward HIV: a health promotion case study from China.
Jiang, Zhen; Wang, Debin; Yang, Sen; Duan, Mingyue; Bu, Pengbin; Green, Andrew; Zhang, Xuejun
2011-06-01
Integrated HIV response refers to a formalized, collaborative process among organizations in communities with HIV at-risk populations. It is both a comprehensive and flexible scheme, which may include community-based environmental promotion, skills coalitions, funding linkages, human resource collaboration and a joint service system for both HIV prevention and control, and it enables decisions and actions to respond over time. In 1997, the Chinese government developed a 10-year HIV project supported by a World Bank loan (H9-HIV/AIDS/STIs). It was the first integrated STI/HIV intervention project in China and provides a unique opportunity to explore long-term comprehensive STI/HIV intervention in a low-middle income country setting. Significant outcomes were identified as the development and promotion of the national strategic plan and its ongoing implementation; positive changes in knowledge, behavior and STI/HIV prevalence rates; and valuable experience in managing integrated HIV/STI intervention projects. Essential factors for the success of the project and the key tasks for the next step were identified, including well-designed interventions in rural and low-income regions, a unified program evaluation framework, and real-time information collection and assessment.
[Intelligent watch system for health monitoring based on Bluetooth low energy technology].
Wang, Ji; Guo, Hailiang; Ren, Xiaoli
2017-08-01
According to the development status of wearable technology and the demand for intelligent health monitoring, we studied a multi-function integrated smart watch solution and its key technologies. First, highly integrated sensor technology, Bluetooth low energy (BLE) and mobile communication technology were combined in the development. Second, for the hardware design of the system, we chose highly integrated, cost-effective computer modules and chips. Third, we used the real-time operating system FreeRTOS to develop a friendly graphical interface that interacts with the touch screen. Finally, a high-performance application connecting wirelessly to the BLE hardware and synchronizing data was developed for the Android system. The functions of this system include a real-time calendar clock, telephone messages, address book management, step counting, heart rate and sleep quality monitoring, and so on. Experiments showed that the data collection accuracy of the various sensors, the system's data transmission capacity and the overall power consumption satisfied the production standard. Moreover, the system ran stably with low power consumption, realizing intelligent health monitoring effectively.
Horizontal vectorization of electron repulsion integrals.
Pritchard, Benjamin P; Chow, Edmond
2016-10-30
We present an efficient implementation of the Obara-Saika algorithm for the computation of electron repulsion integrals that utilizes vector intrinsics to calculate several primitive integrals concurrently in a SIMD vector. Initial benchmarks display a 2-4 times speedup with AVX instructions over comparable scalar code, depending on the basis set. Speedup over scalar code is found to be sensitive to the level of contraction of the basis set, and is best for (lAlB|lClD) quartets when lD = 0 or lB = lD = 0, which makes such a vectorization scheme particularly suitable for density fitting. The basic Obara-Saika algorithm, how it is vectorized, and the performance bottlenecks are analyzed and discussed. © 2016 Wiley Periodicals, Inc.
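The "horizontal" batching idea, one integral per SIMD lane so that a whole vector of primitive integrals shares the same arithmetic sequence, can be sketched in NumPy. For brevity the sketch uses s-type Gaussian overlap integrals, which have a short closed form, rather than repulsion integrals, and the exponents are made-up values:

```python
import numpy as np

def overlap_scalar(a, b, RA, RB):
    """s-type Gaussian overlap integral, one primitive pair at a time."""
    p = a + b
    r2 = sum((x - y) ** 2 for x, y in zip(RA, RB))
    return (np.pi / p) ** 1.5 * np.exp(-a * b / p * r2)

def overlap_batch(alphas, betas, RA, RB):
    """'Horizontal' version: one primitive pair per lane, so the whole batch
    of integrals is produced by a single sequence of vector operations."""
    p = alphas + betas
    r2 = np.sum((np.asarray(RA) - np.asarray(RB)) ** 2)
    return (np.pi / p) ** 1.5 * np.exp(-alphas * betas / p * r2)

alphas = np.array([0.5, 1.1, 2.4, 13.0])   # made-up primitive exponents
betas = np.array([0.4, 0.9, 3.1, 0.2])
RA, RB = (0.0, 0.0, 0.0), (0.0, 0.0, 1.4)
batch = overlap_batch(alphas, betas, RA, RB)
loop = [overlap_scalar(a, b, RA, RB) for a, b in zip(alphas, betas)]
```

As in the paper's AVX kernels, the batch and the scalar loop compute identical quantities; the win comes from filling every lane with an independent primitive pair.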
Computations of Flow over a Hump Model Using Higher Order Method with Turbulence Modeling
NASA Technical Reports Server (NTRS)
Balakumar, P.
2005-01-01
Turbulent separated flow over a two-dimensional hump is computed by solving the RANS equations with the k-omega (SST) turbulence model for the baseline, steady suction and oscillatory blowing/suction flow control cases. The flow equations and the turbulence model equations are solved using a fifth-order accurate weighted essentially nonoscillatory (WENO) scheme for space discretization and a third-order, total variation diminishing (TVD) Runge-Kutta scheme for time integration. Qualitatively, the computed pressure distributions exhibit the same behavior as those observed in the experiments. The computed separation regions are much longer than those observed experimentally. However, the percentage reduction in the separation region in the steady suction case is close to what was measured in the experiment. The computations did not predict the expected reduction in the separation length in the oscillatory case. The predicted turbulent quantities are two to three times smaller than the measured values, pointing towards deficiencies in existing turbulence models when they are applied to strong steady/unsteady separated flows.
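The third-order TVD Runge-Kutta time integration mentioned above is commonly the Shu-Osher scheme, built from convex combinations of forward-Euler steps; a sketch on linear advection with a first-order upwind operator (a stand-in for the paper's WENO discretization) is:

```python
import numpy as np

N = 100
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx, c = x[1] - x[0], 1.0

def L(u):
    # first-order upwind operator for u_t + c u_x = 0 on a periodic grid
    return -c * (u - np.roll(u, 1)) / dx

def ssp_rk3_step(u, dt):
    """Third-order TVD (strong-stability-preserving) Runge-Kutta of Shu and
    Osher: each stage is a convex combination of forward-Euler steps, so the
    total-variation bound of the spatial scheme carries over to the full step."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)   # step profile with two fronts
s0 = u.sum()
dt = 0.5 * dx / c                               # CFL = 0.5
for _ in range(200):
    u = ssp_rk3_step(u, dt)
```

Because every stage is a CFL-respecting Euler step of a monotone scheme, the solution develops no new extrema and remains conservative, which is the property that makes this integrator attractive next to WENO reconstruction.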
NASA Astrophysics Data System (ADS)
Miyaji, Kousuke; Sun, Chao; Soga, Ayumi; Takeuchi, Ken
2014-01-01
A relational database management system (RDBMS) is designed based on a NAND flash solid-state drive (SSD) for storage. By vertically integrating the storage engine (SE) and the flash translation layer (FTL), system performance is maximized and the internal SSD overhead is minimized. The proposed RDBMS SE utilizes physical information about the NAND flash memory which is supplied by the FTL. The query operation is also optimized for the SSD. Through these treatments, page-copy-less garbage collection is achieved and data fragmentation in the NAND flash memory is suppressed. As a result, RDBMS performance increases by 3.8 times, the power consumption of the SSD decreases by 46% and the SSD lifetime increases by 61%. The effectiveness of the proposed scheme increases with larger erase block sizes, which matches the future scaling trend of three-dimensional (3D-) NAND flash memories. The preferable row data size for the proposed scheme is below 500 bytes for a 16 kbyte page size.
NASA Astrophysics Data System (ADS)
Navas, Pedro; Sanavia, Lorenzo; López-Querol, Susana; Yu, Rena C.
2017-12-01
Solving dynamic problems for fluid-saturated porous media in the large deformation regime is an interesting but complex issue. An implicit time integration scheme is herein developed within the framework of the u-w (solid displacement-relative fluid displacement) formulation of Biot's equations. In particular, liquid-water-saturated porous media are considered, and the linearization of the linear momentum equations taking into account all the inertia terms for both solid and fluid phases is presented for the first time. The spatial discretization is carried out through a meshfree method, in which the shape functions are based on the principle of local maximum entropy (LME). The methodology is first validated with the dynamic consolidation of a soil column and the plastic shear band formation in a square domain loaded by a rigid footing. The feasibility of this new numerical approach for solving large deformation dynamic problems is finally demonstrated through its application to an embankment problem subjected to an earthquake.
NASA Technical Reports Server (NTRS)
Duong, N.; Winn, C. B.; Johnson, G. R.
1975-01-01
Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are imbedded in noise.
NASA Astrophysics Data System (ADS)
Navon, I. M.; Yu, Jian
A FORTRAN computer program is presented and documented that applies the Turkel-Zwas explicit large time-step scheme to a hemispheric barotropic model with constraint restoration of integral invariants of the shallow-water equations. We then detail the algorithms embodied in the code EXSHALL, particularly those related to the efficiency and stability of the T-Z scheme and to the quadratic constraint restoration method, which is based on a variational approach. In particular, we provide details of the high-latitude filtering, Shapiro filtering, and Robert filtering algorithms used in the code. We explain the various subroutines in EXSHALL, with emphasis on the algorithms implemented, and present flowcharts of some major subroutines. Finally, we provide a visual example illustrating a 4-day run using real initial data, along with a sample printout and graphic isoline contours of the height and velocity fields.
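Of the filters listed, the Robert (Asselin) time filter can be sketched on the oscillation equation: it weakly damps the computational mode that plain leapfrog leaves undamped. Parameter values are illustrative, and this is a generic sketch, not the EXSHALL code itself:

```python
import numpy as np

def leapfrog_robert(omega=1.0, dt=0.2, n=500, nu=0.05):
    """Leapfrog integration of du/dt = i*omega*u with the Robert (Asselin)
    time filter: after each step the middle time level is nudged toward the
    average of its neighbors, suppressing the 2*dt computational mode at the
    cost of a weak damping of the physical mode."""
    u_prev = 1.0 + 0.0j
    u = u_prev * np.exp(1j * omega * dt)       # exact first step
    for _ in range(n - 1):
        u_next = u_prev + 2.0 * dt * 1j * omega * u
        u_filt = u + nu * (u_next - 2.0 * u + u_prev)   # Robert filter
        u_prev, u = u_filt, u_next
    return u

amp_filtered = abs(leapfrog_robert())
amp_plain = abs(leapfrog_robert(nu=0.0))
```

The filter coefficient trades mode suppression against amplitude loss of the physical solution, which is why codes like EXSHALL expose it as a tunable parameter.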
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Yumin; Lum, Kai-Yew; Wang Qingguo
In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems is presented, using output probability density estimation. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process, and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral of the square-root PDF along the space direction, which yields a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is then investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
NASA Astrophysics Data System (ADS)
Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew
2009-03-01
In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems is presented, using output probability density estimation. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process, and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral of the square-root PDF along the space direction, which yields a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is then investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
NASA Astrophysics Data System (ADS)
Han, Ke-Zhen; Feng, Jian; Cui, Xiaohong
2017-10-01
This paper considers the fault-tolerant optimised tracking control (FTOTC) problem for an unknown discrete-time linear system. A research scheme is proposed on the basis of data-based parity space identification, reinforcement learning and residual compensation techniques. The main characteristic of this scheme lies in the parity-space-identification-based simultaneous tracking control and residual compensation. The specific technical line consists of four main elements: a subspace-aided method is applied to design an observer-based residual generator; a reinforcement Q-learning approach is used to solve for the optimised tracking control policy; robust H∞ theory is relied upon to achieve noise attenuation; and fault estimation triggered by the residual generator is adopted to perform fault compensation. To clarify the design and implementation procedures, an integrated algorithm is further constructed to link these four functional units. Detailed analysis and proof are subsequently given to explain the guaranteed FTOTC performance of the proposed conclusions. Finally, a case simulation is provided to verify its effectiveness.
Tamura, Hiroyuki
2016-11-23
Intermolecular exciton transfers and related conical intersections are analyzed by diabatization for time-dependent density functional theory. The diabatic states are expressed as a linear combination of the adiabatic states so as to emulate the well-defined reference states. The singlet exciton coupling calculated by the diabatization scheme includes contributions from the Coulomb (Förster) and electron exchange (Dexter) couplings. For triplet exciton transfers, the Dexter coupling, charge transfer integral, and diabatic potentials of stacked molecules are calculated for analyzing direct and superexchange pathways. We discuss some topologies of molecular aggregates that induce conical intersections on the vanishing points of the exciton coupling, namely boundary of H- and J-aggregates and T-shape aggregates, as well as canceled exciton coupling to the bright state of H-aggregate, i.e., selective exciton transfer to the dark state. The diabatization scheme automatically accounts for the Berry phase by fixing the signs of reference states while scanning the coordinates.
ATTDES: An Expert System for Satellite Attitude Determination and Control. 2
NASA Technical Reports Server (NTRS)
Mackison, Donald L.; Gifford, Kevin
1996-01-01
The design, analysis, and flight operations of satellite attitude determination and attitude control systems require extensive mathematical formulations, optimization studies, and computer simulation, and are best done by an analyst with extensive education and experience. The development of programs such as ATTDES permits the use of advanced techniques by those with less experience. Typical tasks include mission analysis to select stabilization and damping schemes, attitude determination sensors and algorithms, and control system designs to meet program requirements. ATTDES is a system that supports all of these activities, including high fidelity orbit environment models that can be used for preliminary analysis, parameter selection, stabilization schemes, the development of estimators, covariance analyses, and optimization, and it can support ongoing orbit activities. The modification of existing simulations to model new configurations for these purposes can be an expensive, time-consuming activity that becomes a pacing item in the development and operation of such new systems. The use of an integrated tool such as ATTDES significantly reduces the effort and time required for these tasks.
Modern control concepts in hydrology
NASA Technical Reports Server (NTRS)
Duong, N.; Johnson, G. R.; Winn, C. B.
1974-01-01
Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and conform with results from two previous studies; the first used numerical integration of the model equation along with a trial-and-error procedure, and the second used a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are imbedded in noise.
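A sequential, adaptive identification loop of the kind described can be sketched with recursive least squares on a linear-in-parameters model; the model and noise level below are illustrative stand-ins, not the Prasad rainfall-runoff model:

```python
import numpy as np

def rls_identify(y, u, lam=1.0):
    """Sequential least-squares estimate of (a, b) in
    y[k] = a*y[k-1] + b*u[k] + noise, updated one sample at a time, as in
    sequential/adaptive identification schemes."""
    theta = np.zeros(2)
    P = 1e3 * np.eye(2)                   # large initial P: uninformative prior
    for k in range(1, len(y)):
        phi = np.array([y[k - 1], u[k]])  # regressor at step k
        K = P @ phi / (lam + phi @ P @ phi)
        theta = theta + K * (y[k] - phi @ theta)
        P = (P - np.outer(K, phi @ P)) / lam
    return theta

rng = np.random.default_rng(1)
a_true, b_true = 0.7, 0.3
u = rng.standard_normal(400)              # noisy "input" record
y = np.zeros(400)
for k in range(1, 400):
    y[k] = a_true * y[k - 1] + b_true * u[k] + 0.01 * rng.standard_normal()
a_hat, b_hat = rls_identify(y, u)
```

Setting the forgetting factor lam below 1 would let the same loop track time-dependent parameters, mirroring the adaptive capability claimed in the abstract.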
Two-dimensional CFD modeling of wave rotor flow dynamics
NASA Technical Reports Server (NTRS)
Welch, Gerard E.; Chima, Rodrick V.
1994-01-01
A two-dimensional Navier-Stokes solver developed for detailed study of wave rotor flow dynamics is described. The CFD model is helping characterize important loss mechanisms within the wave rotor. The wave rotor stationary ports and the moving rotor passages are resolved on multiple computational grid blocks. The finite-volume form of the thin-layer Navier-Stokes equations with laminar viscosity are integrated in time using a four-stage Runge-Kutta scheme. Roe's approximate Riemann solution scheme or the computationally less expensive advection upstream splitting method (AUSM) flux-splitting scheme is used to effect upwind-differencing of the inviscid flux terms, using cell interface primitive variables set by MUSCL-type interpolation. The diffusion terms are central-differenced. The solver is validated using a steady shock/laminar boundary layer interaction problem and an unsteady, inviscid wave rotor passage gradual opening problem. A model inlet port/passage charging problem is simulated and key features of the unsteady wave rotor flow field are identified. Lastly, the medium pressure inlet port and high pressure outlet port portion of the NASA Lewis Research Center experimental divider cycle is simulated and computed results are compared with experimental measurements. The model accurately predicts the wave timing within the rotor passages and the distribution of flow variables in the stationary inlet port region.
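The four-stage Runge-Kutta time integration used here is commonly the Jameson-style multistage scheme, in which every stage is a scaled Euler step from the time-level solution; a sketch on a linear model problem (the coefficients 1/4, 1/3, 1/2, 1 are the usual choice for such solvers, assumed rather than taken from the paper) is:

```python
import numpy as np

def rk4_multistage(u, dt, R, alphas=(0.25, 1.0 / 3.0, 0.5, 1.0)):
    """Jameson-style four-stage Runge-Kutta: every stage is a scaled Euler
    step taken from the time-level solution u0.  For a linear residual R it
    reproduces the classical RK4 stability polynomial
    1 + z + z^2/2 + z^3/6 + z^4/24."""
    u0 = u
    for a in alphas:
        u = u0 + a * dt * R(u)
    return u

lam, u, dt = -1.0, 1.0, 0.1
for _ in range(10):                      # integrate du/dt = lam*u to t = 1
    u = rk4_multistage(u, dt, lambda v: lam * v)
```

Only the time-level solution and the latest stage need to be stored, which is why this multistage form is popular for large finite-volume flow solvers.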
Two-dimensional CFD modeling of wave rotor flow dynamics
NASA Technical Reports Server (NTRS)
Welch, Gerard E.; Chima, Rodrick V.
1993-01-01
A two-dimensional Navier-Stokes solver developed for detailed study of wave rotor flow dynamics is described. The CFD model is helping characterize important loss mechanisms within the wave rotor. The wave rotor stationary ports and the moving rotor passages are resolved on multiple computational grid blocks. The finite-volume form of the thin-layer Navier-Stokes equations with laminar viscosity are integrated in time using a four-stage Runge-Kutta scheme. The Roe approximate Riemann solution scheme or the computationally less expensive Advection Upstream Splitting Method (AUSM) flux-splitting scheme is used to effect upwind-differencing of the inviscid flux terms, using cell interface primitive variables set by MUSCL-type interpolation. The diffusion terms are central-differenced. The solver is validated using a steady shock/laminar boundary layer interaction problem and an unsteady, inviscid wave rotor passage gradual opening problem. A model inlet port/passage charging problem is simulated and key features of the unsteady wave rotor flow field are identified. Lastly, the medium pressure inlet port and high pressure outlet port portion of the NASA Lewis Research Center experimental divider cycle is simulated and computed results are compared with experimental measurements. The model accurately predicts the wave timing within the rotor passage and the distribution of flow variables in the stationary inlet port region.
NASA Astrophysics Data System (ADS)
Kim, Tae-Wook; Park, Sang-Gyu; Choi, Byong-Deok
2011-03-01
The previous pixel-level digital-to-analog-conversion (DAC) scheme, which implements part of a DAC in the pixel circuit, has proven very efficient for reducing the peripheral area of an integrated data driver fabricated with low-temperature polycrystalline silicon thin-film transistors (LTPS TFTs). However, because LTPS TFTs suffer from random variations in their characteristics, an open issue is whether the pixel-level DAC can be made compatible with existing pixel circuits, including schemes that compensate for TFT variations and IR drops on supply rails, which is of primary importance for active-matrix organic light-emitting diode (AMOLED) displays. In this paper, we show that the pixel-level DAC scheme can be successfully combined with previous compensation schemes by giving two examples of voltage- and current-programming pixels. The previous pixel-level DAC schemes require two additional TFTs and one capacitor, but for the newly proposed pixel circuits the overhead is no more than two TFTs, since the already existing capacitor is reused. In addition, a detailed analysis shows that the pixel-level DAC can be expanded to a 4-bit resolution, or be applied together with 1:2 demultiplexing driving for 6- to 8-in. diagonal XGA AMOLED display panels.
NASA Technical Reports Server (NTRS)
Arakawa, A.; Lamb, V. R.
1979-01-01
A three-dimensional finite difference scheme for the solution of the shallow water momentum equations which accounts for the conservation of potential enstrophy in the flow of a homogeneous incompressible shallow atmosphere over steep topography as well as for total energy conservation is presented. The scheme is derived to be consistent with a reasonable scheme for potential vorticity advection in a long-term integration for a general flow with divergent mass flux. Numerical comparisons of the characteristics of the present potential enstrophy-conserving scheme with those of a scheme that conserves potential enstrophy only for purely horizontal nondivergent flow are presented which demonstrate the reduction of computational noise in the wind field with the enstrophy-conserving scheme and its convergence even in relatively coarse grids.
Control of Vacuum Induction Brazing System for Sealing of Instrumentation Feedthrough
NASA Astrophysics Data System (ADS)
Ahn, Sung Ho; Hong, Jintae; Joung, Chang Young; Heo, Sung Ho
2017-04-01
The integrity of instrumentation cables is an important performance parameter in the brazing process, along with the sealing performance. In this paper, an accurate control scheme for brazing of the instrumentation feedthrough in a vacuum induction brazing system was developed. The experimental results show that accurate brazing temperature control is achieved by the developed control scheme. It is demonstrated that the sealing performance of the instrumentation feedthrough and the integrity of the instrumentation cables are acceptable after brazing.
Cubic scaling algorithms for RPA correlation using interpolative separable density fitting
NASA Astrophysics Data System (ADS)
Lu, Jianfeng; Thicke, Kyle
2017-12-01
We present a new cubic scaling algorithm for the calculation of the RPA correlation energy. Our scheme splits up the dependence between the occupied and virtual orbitals in χ0 by use of Cauchy's integral formula. This introduces an additional integral to be carried out, for which we provide a geometrically convergent quadrature rule. Our scheme also uses the newly developed Interpolative Separable Density Fitting algorithm to further reduce the computational cost in a way analogous to that of the Resolution of Identity method.
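The Cauchy-integral device mentioned above, with a geometrically convergent quadrature, can be illustrated on a scalar analytic function; this is a toy demonstration of the quadrature idea, not the RPA algorithm itself:

```python
import cmath
import math

def cauchy_eval(f, z0, radius, n):
    """Approximate f(z0) via the Cauchy integral formula,
    f(z0) = (1/(2*pi*i)) * contour integral of f(z)/(z - z0) dz,
    using an n-point trapezoidal rule on a circle around z0. On the
    circle the weighted integrand reduces to a plain average, and for
    analytic f the error decays geometrically in n."""
    total = 0j
    for k in range(n):
        z = z0 + radius * cmath.exp(2j * math.pi * k / n)
        total += f(z)
    return total / n

approx = cauchy_eval(cmath.exp, 0.3, 1.0, 16)
```

Doubling the number of quadrature nodes squares the error factor, which is the geometric convergence exploited when splitting the occupied-virtual dependence.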
Digital computer program for generating dynamic turbofan engine models (DIGTEM)
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Krosel, S. M.; Szuch, J. R.; Westerkamp, E. J.
1983-01-01
This report describes DIGTEM, a digital computer program that simulates two-spool, two-stream turbofan engines. The turbofan engine model in DIGTEM contains steady-state performance maps for all of the components and has control volumes where continuity and energy balances are maintained. Rotor dynamics and duct momentum dynamics are also included. Altogether there are 16 state variables and state equations. DIGTEM features a backward-difference integration scheme for integrating stiff systems. It trims the model equations to match a prescribed design point by calculating correction coefficients that balance out the dynamic equations. It uses the same coefficients at off-design points and iterates to a balanced engine condition. Transients can also be run. They are generated by defining controls as a function of time (open-loop control) in a user-written subroutine (TMRSP). DIGTEM has run on the IBM 370/3033 computer using implicit integration with time steps ranging from 1.0 msec to 1.0 sec. DIGTEM is generalized in the aerothermodynamic treatment of components.
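The advantage of a backward-difference (implicit) scheme for stiff systems can be shown on a scalar model equation; this sketch is illustrative only and is not the DIGTEM code:

```python
import math

def backward_euler_step(u, t_next, dt, lam):
    """Implicit (backward-difference) step for the stiff model ODE
    u' = -lam*(u - cos t) - sin t, whose exact solution is cos t
    when u(0) = 1. The implicit equation is linear in u and is
    solved in closed form here."""
    return (u + dt * (lam * math.cos(t_next) - math.sin(t_next))) / (1.0 + dt * lam)

lam, dt = 1000.0, 0.05   # dt is far beyond the explicit stability limit ~2/lam
u, t = 1.0, 0.0
while t < 1.0 - 1e-12:
    t += dt
    u = backward_euler_step(u, t, dt, lam)
```

An explicit scheme at this step size would diverge immediately; the implicit step remains stable and tracks the smooth solution, which is why stiff engine dynamics favor implicit integration.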
NASA Astrophysics Data System (ADS)
Huang, Zhiwei; Bergholt, Mads Sylvest; Zheng, Wei; Ho, Khek Yu; Yeoh, Khay Guan; Teh, Ming; So, Jimmy Bok Yan; Shabbir, Asim
2013-03-01
A rapid image-guided Raman endoscopy system integrated with an on-line diagnostic scheme is developed for in vivo Raman tissue diagnosis (optical biopsy) in the upper gastrointestinal (GI) tract during clinical endoscopy under multimodal wide-field imaging guidance. The real-time Raman endoscopy technique was tested prospectively on new gastric patients (n=4) and could identify dysplasia in vivo with a sensitivity of 81.5% (22/27) and a specificity of 87.9% (29/33). This study realizes for the first time the novel image-guided Raman endoscopy as a screening tool for real-time, online diagnosis of gastric cancer and precancer in vivo at endoscopy.
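The reported diagnostic rates follow directly from the quoted counts:

```python
# Diagnostic performance from the reported counts.
tp, fn = 22, 5      # dysplasia correctly / incorrectly identified (22 of 27)
tn, fp = 29, 4      # non-dysplasia correctly / incorrectly identified (29 of 33)

sensitivity = tp / (tp + fn)    # true-positive rate
specificity = tn / (tn + fp)    # true-negative rate
print(round(100 * sensitivity, 1), round(100 * specificity, 1))  # 81.5 87.9
```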
Compression-based integral curve data reuse framework for flow visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Fan; Bi, Chongke; Guo, Hanqi
Currently, by default, integral curves are repeatedly re-computed in different flow visualization applications, such as FTLE field computation, source-destination queries, etc., leading to unnecessary resource cost. We present a compression-based data reuse framework for integral curves, to greatly reduce their retrieval cost, especially in a resource-limited environment. In our design, a hierarchical and hybrid compression scheme is proposed to balance three objectives, including high compression ratio, controllable error, and low decompression cost. Specifically, we use and combine digitized curve sparse representation, floating-point data compression, and octree space partitioning to adaptively achieve the objectives. Results have shown that our data reuse framework achieves a speedup of tens of times in the resource-limited environment compared to on-the-fly particle tracing, while keeping information loss controllable. Moreover, our method provides fast integral curve retrieval for more complex data, such as unstructured mesh data.
On-board closed-loop congestion control for satellite based packet switching networks
NASA Technical Reports Server (NTRS)
Chu, Pong P.; Ivancic, William D.; Kim, Heechul
1993-01-01
NASA LeRC is currently investigating a satellite architecture that incorporates on-board packet switching capability. Because of the statistical nature of packet switching, arrival traffic may fluctuate and thus it is necessary to integrate a congestion control mechanism as part of the on-board processing unit. This study focuses on closed-loop reactive control. We investigate the impact of the long propagation delay on the performance and propose a scheme to overcome the problem. The scheme uses a global feedback signal to regulate the packet arrival rate of ground stations. In this scheme, the satellite continuously broadcasts the status of its output buffer and the ground stations respond by selectively discarding packets or by tagging the excessive packets as low-priority. The two schemes are evaluated by theoretical queuing analysis and simulation. The former is used to analyze the simplified model and to determine the basic trends and bounds, and the latter is used to assess the performance of a more realistic system and to evaluate the effectiveness of more sophisticated control schemes. The results show that the long propagation delay makes the closed-loop congestion control less responsive. The broadcast information can only be used to extract statistical information. The discarding scheme needs carefully chosen status information and reduction function, and normally requires a significant amount of ground discarding to reduce the on-board packet loss probability. The tagging scheme is more effective since it tolerates more uncertainties and allows a larger margin of error in status information. It can protect the high-priority packets from excessive loss and fully utilize the downlink bandwidth at the same time.
NASA Astrophysics Data System (ADS)
Navas-Montilla, A.; Murillo, J.
2016-07-01
In this work, an arbitrary order HLL-type numerical scheme is constructed using the flux-ADER methodology. The proposed scheme is based on an augmented Derivative Riemann solver that was used for the first time in Navas-Montilla and Murillo (2015) [1]. This solver, hereafter referred to as the Flux-Source (FS) solver, was conceived as a high order extension of the augmented Roe solver and led to the generation of a novel numerical scheme called the AR-ADER scheme. Here, we provide a general definition of the FS solver independently of the Riemann solver used in it. Moreover, a simplified version of the solver, referred to as the Linearized-Flux-Source (LFS) solver, is presented. This version of the FS solver allows the solution to be computed without requiring reconstruction of flux derivatives, although it exhibits some drawbacks. In contrast to other previously defined Derivative Riemann solvers, the proposed FS and LFS solvers take into account the presence of the source term in the resolution of the Derivative Riemann Problem (DRP), which is of particular interest when dealing with geometric source terms. When applied to the shallow water equations, the proposed HLLS-ADER and AR-ADER schemes can be constructed to fulfill the exactly well-balanced property, showing that an arbitrary quadrature of the integral of the source inside the cell does not ensure energy balanced solutions. As a result of this work, energy balanced flux-ADER schemes that provide the exact solution for steady cases and that converge to the exact solution with arbitrary order for transient cases are constructed.
Entropy Splitting for High Order Numerical Simulation of Compressible Turbulence
NASA Technical Reports Server (NTRS)
Sandham, N. D.; Yee, H. C.; Kwak, Dochan (Technical Monitor)
2000-01-01
A stable high order numerical scheme for direct numerical simulation (DNS) of shock-free compressible turbulence is presented. The method is applicable to general geometries. It contains no upwinding, artificial dissipation, or filtering. Instead the method relies on the stabilizing mechanisms of an appropriate conditioning of the governing equations and the use of compatible spatial difference operators for the interior points (interior scheme) as well as the boundary points (boundary scheme). An entropy splitting approach splits the inviscid flux derivatives into conservative and non-conservative portions. The spatial difference operators satisfy a summation-by-parts condition, leading to a stable scheme (combined interior and boundary schemes) for the initial boundary value problem using a generalized energy estimate. A Laplacian formulation of the viscous and heat conduction terms on the right hand side of the Navier-Stokes equations is used to ensure that any tendency to odd-even decoupling associated with central schemes can be countered by the fluid viscosity. A special formulation of the continuity equation is used, based on similar arguments. The resulting methods are able to minimize the spurious high-frequency oscillations that produce nonlinear instability in pure central schemes, especially for long-time integration simulations such as DNS. For validation purposes, the methods are tested in a DNS of compressible turbulent plane channel flow at a friction Mach number of 0.1 where a very accurate turbulence database exists. It is demonstrated that the methods are robust in terms of grid resolution, and in good agreement with incompressible channel data, as expected at this Mach number. Accurate turbulence statistics can be obtained with moderate grid sizes. Stability limits on the range of the splitting parameter are determined from numerical tests.
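The summation-by-parts (SBP) condition mentioned above can be checked numerically for the simplest case; the sketch below builds a second-order SBP first-derivative operator (assumed here for illustration, the paper uses higher-order operators) and verifies that H*D + (H*D)^T equals the boundary matrix B = diag(-1, 0, ..., 0, 1):

```python
def sbp_matrices(n, h):
    """Second-order SBP first-derivative operator D and diagonal norm
    (quadrature) matrix H on n+1 grid points with spacing h: one-sided
    differences at the two boundaries, central differences inside."""
    size = n + 1
    D = [[0.0] * size for _ in range(size)]
    D[0][0], D[0][1] = -1.0 / h, 1.0 / h
    D[n][n - 1], D[n][n] = -1.0 / h, 1.0 / h
    for i in range(1, n):
        D[i][i - 1], D[i][i + 1] = -0.5 / h, 0.5 / h
    H = [h] * size            # trapezoidal weights
    H[0] = H[n] = h / 2.0
    return D, H

n, h = 8, 0.125
D, H = sbp_matrices(n, h)
Q = [[H[i] * D[i][j] for j in range(n + 1)] for i in range(n + 1)]
S = [[Q[i][j] + Q[j][i] for j in range(n + 1)] for i in range(n + 1)]
# S should be zero except S[0][0] = -1 and S[n][n] = +1
```

This identity is what makes the discrete energy estimate mimic integration by parts, giving the combined interior/boundary scheme its stability.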
An Implicit Characteristic Based Method for Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.; Briley, W. Roger
2001-01-01
An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.
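For context on the FDTD reference algorithm used for comparison, the one-dimensional explicit scheme reduces to a leapfrog update of the two field components; the grid size, pulse, and Courant number below are illustrative choices, not the paper's setup:

```python
import math

# 1-D FDTD (Yee-type) leapfrog update in free space, normalized units.
nx, nsteps, courant = 200, 150, 0.5   # courant < 1 for explicit stability
ez = [math.exp(-((i - 100) / 10.0) ** 2) for i in range(nx)]  # Gaussian pulse in Ez
hy = [0.0] * nx

for _ in range(nsteps):
    for i in range(nx - 1):           # update H from the spatial difference of E
        hy[i] += courant * (ez[i + 1] - ez[i])
    for i in range(1, nx):            # update E from the spatial difference of H
        ez[i] += courant * (hy[i] - hy[i - 1])
```

The explicit update is limited to Courant numbers below one; the implicit LU/AF scheme described above trades extra per-step work for the freedom to exceed that limit.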
Method and apparatus for conversion of carbonaceous materials to liquid fuel
Lux, Kenneth W.; Namazian, Mehdi; Kelly, John T.
2015-12-01
Embodiments of the invention relate to conversion of hydrocarbon material, including but not limited to coal and biomass, to a synthetic liquid transportation fuel. The invention includes the integration of a non-catalytic first reaction scheme, which converts carbonaceous materials into a gaseous product and a solid product that includes char and ash; a non-catalytic second reaction scheme, which converts a portion of the gaseous product from the first reaction scheme to light olefins and liquid byproducts; traditional gas-cleanup operations; and a third reaction scheme, which combines the olefins from the second reaction scheme to produce a targeted fuel such as liquid transportation fuels.
Real-time path planning and autonomous control for helicopter autorotation
NASA Astrophysics Data System (ADS)
Yomchinda, Thanan
Autorotation is a descending maneuver that can be used to recover helicopters in the event of total loss of engine power; however, it is an extremely difficult and complex maneuver. The objective of this work is to develop a real-time system which provides full autonomous control for autorotation landing of helicopters. The work includes the development of an autorotation path planning method and integration of the path planner with a primary flight control system. The trajectory is divided into three parts: entry, descent and flare. Three different optimization algorithms are used to generate trajectories for each of these segments. The primary flight control is designed using a linear dynamic inversion control scheme, and a path following control law is developed to track the autorotation trajectories. Details of the path planning algorithm, trajectory following control law, and autonomous autorotation system implementation are presented. The integrated system is demonstrated in real-time high fidelity simulations. Results indicate that the algorithms can operate in real time and that the integrated system can provide safe autorotation landings. Preliminary simulations of autonomous autorotation on a small UAV are presented, which will lead to a final hardware demonstration of the algorithms.
Integral equation methods for vesicle electrohydrodynamics in three dimensions
NASA Astrophysics Data System (ADS)
Veerapaneni, Shravan
2016-12-01
In this paper, we develop a new boundary integral equation formulation that describes the coupled electro- and hydro-dynamics of a vesicle suspended in a viscous fluid and subjected to external flow and electric fields. The dynamics of the vesicle are characterized by a competition between the elastic, electric and viscous forces on its membrane. The classical Taylor-Melcher leaky-dielectric model is employed for the electric response of the vesicle and the Helfrich energy model combined with local inextensibility is employed for its elastic response. The coupled governing equations for the vesicle position and its transmembrane electric potential are solved using a numerical method that is spectrally accurate in space and first-order in time. The method uses a semi-implicit time-stepping scheme to overcome the numerical stiffness associated with the governing equations.
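The semi-implicit time-stepping idea used to overcome stiffness can be shown on a scalar model; this is an IMEX Euler sketch with an invented model equation, not the paper's spectral membrane solver:

```python
import math

def imex_step(u, dt, lam):
    """Semi-implicit (IMEX) Euler step for the model ODE
    u' = -lam*u + cos(u): the stiff linear term is treated
    implicitly, the mild nonlinear term explicitly, so no
    nonlinear solve is needed per step."""
    return (u + dt * math.cos(u)) / (1.0 + dt * lam)

u, dt, lam = 1.0, 0.5, 10.0   # dt well above the explicit stability limit ~2/lam
for _ in range(200):
    u = imex_step(u, dt, lam)
# u settles at the equilibrium lam*u = cos(u)
```

Treating only the stiff part implicitly keeps each step cheap while removing the severe step-size restriction, which is the same trade-off exploited for the stiff membrane forces.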
Disentangling Complexity in Bayesian Automatic Adaptive Quadrature
NASA Astrophysics Data System (ADS)
Adam, Gheorghe; Adam, Sanda
2018-02-01
The paper describes a Bayesian automatic adaptive quadrature (BAAQ) solution for numerical integration which is simultaneously robust, reliable, and efficient. Detailed discussion is provided of three main factors which contribute to the enhancement of these features: (1) refinement of the m-panel automatic adaptive scheme through the use of integration-domain-length-scale-adapted quadrature sums; (2) fast early problem complexity assessment - enables the non-transitive choice among three execution paths: (i) immediate termination (exceptional cases); (ii) pessimistic - involves time and resource consuming Bayesian inference resulting in radical reformulation of the problem to be solved; (iii) optimistic - asks exclusively for subrange subdivision by bisection; (3) use of the weaker accuracy target from the two possible ones (the input accuracy specifications and the intrinsic integrand properties respectively) - results in maximum possible solution accuracy under minimum possible computing time.
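The "subrange subdivision by bisection" execution path can be illustrated with a classical adaptive quadrature skeleton; Simpson's rule stands in here for the quadrature sums, so this is a generic sketch rather than the BAAQ algorithm:

```python
def adaptive_quad(f, a, b, tol):
    """Automatic adaptive quadrature by bisection: apply Simpson's rule
    on a panel, compare against the two half-panels, and recurse until
    the local error estimate meets the (halved) accuracy target."""
    def simpson(lo, hi):
        mid = (lo + hi) / 2.0
        return (hi - lo) / 6.0 * (f(lo) + 4.0 * f(mid) + f(hi))

    def recurse(lo, hi, whole, tol):
        mid = (lo + hi) / 2.0
        left, right = simpson(lo, mid), simpson(mid, hi)
        if abs(left + right - whole) < 15.0 * tol:   # Richardson-style estimate
            return left + right
        return recurse(lo, mid, left, tol / 2.0) + recurse(mid, hi, right, tol / 2.0)

    return recurse(a, b, simpson(a, b), tol)

result = adaptive_quad(lambda x: x ** 0.5, 0.0, 1.0, 1e-8)
print(result)  # close to 2/3; subdivision clusters near the difficult endpoint
```

The bisection concentrates panels where the integrand is hard (here near x = 0), which is the optimistic, low-cost path; the Bayesian machinery above is invoked only when such refinement alone cannot reach the target.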
A cache-aided multiprocessor rollback recovery scheme
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent
1989-01-01
This paper demonstrates how previous uniprocessor cache-aided recovery schemes can be applied to multiprocessor architectures, for recovering from transient processor failures, utilizing private caches and a global shared memory. As with cache-aided uniprocessor recovery, the multiprocessor cache-aided recovery scheme of this paper can be easily integrated into standard bus-based snoopy cache coherence protocols. A consistent shared memory state is maintained without the necessity of global check-pointing.
Assimilation of gridded terrestrial water storage observations from GRACE into a land surface model
NASA Astrophysics Data System (ADS)
Girotto, Manuela; De Lannoy, Gabriëlle J. M.; Reichle, Rolf H.; Rodell, Matthew
2016-05-01
Observations of terrestrial water storage (TWS) from the Gravity Recovery and Climate Experiment (GRACE) satellite mission have a coarse resolution in time (monthly) and space (roughly 150,000 km2 at midlatitudes) and vertically integrate all water storage components over land, including soil moisture and groundwater. Data assimilation can be used to horizontally downscale and vertically partition GRACE-TWS observations. This work proposes a variant of existing ensemble-based GRACE-TWS data assimilation schemes. The new algorithm differs in how the analysis increments are computed and applied. Existing schemes correlate the uncertainty in the modeled monthly TWS estimates with errors in the soil moisture profile state variables at a single instant in the month and then apply the increment either at the end of the month or gradually throughout the month. The proposed new scheme first computes increments for each day of the month and then applies the average of those increments at the beginning of the month. The new scheme therefore better reflects submonthly variations in TWS errors. The new and existing schemes are investigated here using gridded GRACE-TWS observations. The assimilation results are validated at the monthly time scale, using in situ measurements of groundwater depth and soil moisture across the U.S. The new assimilation scheme yields improved (although not in a statistically significant sense) skill metrics for groundwater compared to the open-loop (no assimilation) simulations and compared to the existing assimilation schemes. A smaller impact is seen for surface and root-zone soil moisture, which have a shorter memory and receive smaller increments from TWS assimilation than groundwater. 
These results motivate future efforts to combine GRACE-TWS observations with observations that are more sensitive to surface soil moisture, such as L-band brightness temperature observations from Soil Moisture Ocean Salinity (SMOS) or Soil Moisture Active Passive (SMAP). Finally, we demonstrate that the scaling parameters that are applied to the GRACE observations prior to assimilation should be consistent with the land surface model that is used within the assimilation system.
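The averaged-increment idea can be sketched in a deliberately simplified scalar form; the gain, numbers, and function name below are hypothetical and only illustrate computing one increment per day and applying their mean at the start of the month:

```python
def averaged_increment(daily_model_tws, obs_monthly_tws, gain):
    """Toy version of the proposed scheme: a Kalman-like increment is
    computed against each day's model TWS, and the mean of those daily
    increments is returned as the single correction to apply at the
    beginning of the month."""
    incs = [gain * (obs_monthly_tws - m) for m in daily_model_tws]
    return sum(incs) / len(incs)

model = [10.0 + 0.2 * d for d in range(30)]   # drifting model TWS (mm), illustrative
inc = averaged_increment(model, obs_monthly_tws=15.0, gain=0.5)
```

Because every day of the month contributes its own mismatch, the correction reflects submonthly variations in the TWS error rather than the state at a single instant.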
Assimilation of Gridded Terrestrial Water Storage Observations from GRACE into a Land Surface Model
NASA Technical Reports Server (NTRS)
Girotto, Manuela; De Lannoy, Gabrielle J. M.; Reichle, Rolf H.; Rodell, Matthew
2016-01-01
Observations of terrestrial water storage (TWS) from the Gravity Recovery and Climate Experiment (GRACE) satellite mission have a coarse resolution in time (monthly) and space (roughly 150,000 km^2 at midlatitudes) and vertically integrate all water storage components over land, including soil moisture and groundwater. Data assimilation can be used to horizontally downscale and vertically partition GRACE-TWS observations. This work proposes a variant of existing ensemble-based GRACE-TWS data assimilation schemes. The new algorithm differs in how the analysis increments are computed and applied. Existing schemes correlate the uncertainty in the modeled monthly TWS estimates with errors in the soil moisture profile state variables at a single instant in the month and then apply the increment either at the end of the month or gradually throughout the month. The proposed new scheme first computes increments for each day of the month and then applies the average of those increments at the beginning of the month. The new scheme therefore better reflects submonthly variations in TWS errors. The new and existing schemes are investigated here using gridded GRACE-TWS observations. The assimilation results are validated at the monthly time scale, using in situ measurements of groundwater depth and soil moisture across the U.S. The new assimilation scheme yields improved (although not in a statistically significant sense) skill metrics for groundwater compared to the open-loop (no assimilation) simulations and compared to the existing assimilation schemes. A smaller impact is seen for surface and root-zone soil moisture, which have a shorter memory and receive smaller increments from TWS assimilation than groundwater.
These results motivate future efforts to combine GRACE-TWS observations with observations that are more sensitive to surface soil moisture, such as L-band brightness temperature observations from Soil Moisture Ocean Salinity (SMOS) or Soil Moisture Active Passive (SMAP). Finally, we demonstrate that the scaling parameters that are applied to the GRACE observations prior to assimilation should be consistent with the land surface model that is used within the assimilation system.
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.
2014-10-01
The Purdue-Lin scheme is a relatively sophisticated microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme includes six classes of hydrometeors: water vapor, cloud water, rain, cloud ice, snow, and graupel. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. In this paper, we accelerate the Purdue-Lin scheme using Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi is a high-performance coprocessor consisting of up to 61 cores. The Xeon Phi is connected to a CPU via the PCI Express (PCIe) bus. We discuss in detail the code optimization issues encountered while tuning the Purdue-Lin microphysics Fortran code for the Xeon Phi. In particular, achieving good performance required utilizing multiple cores, using the wide vector operations, and making efficient use of memory. The results show that the optimizations improved performance of the original code on the Xeon Phi 5110P by a factor of 4.2x. Furthermore, the same optimizations improved performance on an Intel Xeon E5-2603 CPU by a factor of 1.2x compared to the original code.
Adaptive Packet Combining Scheme in Three State Channel Model
NASA Astrophysics Data System (ADS)
Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak
2018-01-01
Two popular packet-combining-based error correction schemes are the Packet Combining (PC) scheme and the Aggressive Packet Combining (APC) scheme. Each has its own merits and demerits: PC offers better throughput than APC, but suffers from a higher packet error rate. Because the wireless channel state is random and time-varying, individual application of the SR ARQ, PC, or APC scheme cannot deliver the desired level of throughput. Better throughput can be achieved if the transmission scheme is chosen according to the channel condition. Based on this approach, an adaptive packet combining scheme is proposed: it adapts to the channel condition, carrying out transmission using the PC, APC, or SR ARQ scheme as appropriate. Experimentally, the error correction capability and throughput of the proposed scheme were observed to be significantly better than those of the SR ARQ, PC, and APC schemes applied individually.
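The error-correcting core of aggressive packet combining is a bitwise majority vote over an odd number of received copies of the same packet; a minimal sketch with illustrative bit patterns:

```python
def majority_combine(copies):
    """Aggressive-packet-combining style correction: for each bit
    position, output the value held by the majority of the received
    copies (copies is a list of equal-length 0/1 lists, odd count)."""
    n = len(copies)
    return [1 if sum(bits) > n // 2 else 0 for bits in zip(*copies)]

packet = [1, 0, 1, 1, 0, 0, 1, 0]   # transmitted bits
rx1 = [1, 0, 1, 1, 0, 0, 1, 0]      # error-free copy
rx2 = [1, 1, 1, 1, 0, 0, 1, 0]      # bit 1 flipped by the channel
rx3 = [1, 0, 1, 0, 0, 0, 1, 0]      # bit 3 flipped by the channel
decoded = majority_combine([rx1, rx2, rx3])
```

As long as no bit position is corrupted in a majority of copies, the vote recovers the packet without retransmission, which is what buys APC its lower packet error rate.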
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prokudin, Alexei; Sun, Peng; Yuan, Feng
2015-10-01
Following an earlier derivation by Catani-de Florian-Grazzini (2000) on the scheme dependence in the Collins-Soper-Sterman (CSS) resummation formalism in hard scattering processes, we investigate the scheme dependence of the Transverse Momentum Distributions (TMDs) and their applications. By adopting a universal C-coefficient function associated with the integrated parton distributions, the difference between various TMD schemes can be attributed to a perturbatively calculable function depending on the hard momentum scale. Thus, we further apply several TMD schemes to the Drell-Yan process of lepton pair production in hadronic collisions, and find that the constrained non-perturbative form factors in different schemes are remarkably consistent with each other and with that of the standard CSS formalism.
NASA Astrophysics Data System (ADS)
Prokudin, Alexei; Sun, Peng; Yuan, Feng
2015-11-01
Following an earlier derivation by Catani, de Florian and Grazzini (2000) on the scheme dependence in the Collins-Soper-Sterman (CSS) resummation formalism in hard scattering processes, we investigate the scheme dependence of the Transverse Momentum Distributions (TMDs) and their applications. By adopting a universal C-coefficient function associated with the integrated parton distributions, the difference between various TMD schemes can be attributed to a perturbatively calculable function depending on the hard momentum scale. We further apply several TMD schemes to the Drell-Yan process of lepton pair production in hadronic collisions, and find that the constrained non-perturbative form factors in different schemes are consistent with each other and with that of the standard CSS formalism.
NASA Technical Reports Server (NTRS)
Phillips, J. R.
1996-01-01
In this paper we derive error bounds for a collocation-grid-projection scheme tuned for use in multilevel methods for solving boundary-element discretizations of potential integral equations. The grid-projection scheme is then combined with a precorrected-FFT style multilevel method for solving potential integral equations with 1/r and e^(ikr)/r kernels. A complexity analysis of this combined method is given to show that for homogeneous problems, the method is O(n log n) nearly independent of the kernel. In addition, it is shown analytically and experimentally that for an inhomogeneity generated by a very finely discretized surface, the combined method slows to O(n^(4/3)). Finally, examples are given to show that the collocation-based grid-projection plus precorrected-FFT scheme is competitive with fast-multipole algorithms when considering realistic problems and 1/r kernels, but can be used over a range of spatial frequencies with only a small performance penalty.
Information flow in an atmospheric model and data assimilation
NASA Astrophysics Data System (ADS)
Yoon, Young-noh
2011-12-01
Weather forecasting consists of two processes, model integration and analysis (data assimilation). During the model integration, the state estimate produced by the analysis evolves to the next cycle time according to the atmospheric model to become the background estimate. The analysis then produces a new state estimate by combining the background state estimate with new observations, and the cycle repeats. In an ensemble Kalman filter, the probability distribution of the state estimate is represented by an ensemble of sample states, and the covariance matrix is calculated using the ensemble of sample states. We perform numerical experiments on toy atmospheric models introduced by Lorenz in 2005 to study the information flow in an atmospheric model in conjunction with ensemble Kalman filtering for data assimilation. This dissertation consists of two parts. The first part of this dissertation is about the propagation of information and the use of localization in ensemble Kalman filtering. If we can perform data assimilation locally by considering the observations and the state variables only near each grid point, then we can reduce the number of ensemble members necessary to cover the probability distribution of the state estimate, reducing the computational cost for the data assimilation and the model integration. Several localized versions of the ensemble Kalman filter have been proposed. Although tests applying such schemes have proven them to be extremely promising, a full basic understanding of the rationale and limitations of localization is currently lacking. We address these issues and elucidate the role played by chaotic wave dynamics in the propagation of information and the resulting impact on forecasts. The second part of this dissertation is about ensemble regional data assimilation using joint states. 
Assuming that we have a global model and a regional model of higher accuracy defined in a subregion inside the global region, we propose a data assimilation scheme that produces the analyses for the global and the regional model simultaneously, considering forecast information from both models. We show that our new data assimilation scheme produces better results both in the subregion and the global region than the data assimilation scheme that produces the analyses for the global and the regional model separately.
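The ensemble Kalman filter analysis step described above, in which the covariance matrix is estimated from the sample ensemble, can be sketched as follows. This is a generic perturbed-observation EnKF update on a toy linear observation operator, not the dissertation's localized or joint-state schemes; the matrix shapes and test setup are illustrative assumptions:

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Perturbed-observation EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble; y: observation vector;
    H: linear observation operator; R: observation error covariance."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)
    Pf_Ht = A @ HA.T / (N - 1)                   # sample estimate of P_f H^T
    S = HA @ HA.T / (N - 1) + R                  # innovation covariance
    K = np.linalg.solve(S, Pf_Ht.T).T            # Kalman gain (S is symmetric)
    # perturb the observation for each member so the analysis spread is correct
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - HX)
```

After the update, the analysis ensemble mean is pulled toward the observation and the spread of the observed variable shrinks, as expected from the Kalman update.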
A New Mirroring Circuit for Power MOS Current Sensing Highly Immune to EMI
Aiello, Orazio; Fiori, Franco
2013-01-01
This paper deals with the monitoring of power transistor current subjected to radio-frequency interference. In particular, a new current sensor with no connection to the power transistor drain, and with improved performance with respect to existing current-sensing schemes, is presented. The operation of this current sensor is discussed with reference to time-domain computer simulations. The susceptibility of the proposed circuit to radio-frequency interference is likewise evaluated through time-domain computer simulations, and the results are compared with those obtained for a conventional integrated current sensor. PMID:23385408
Sasaki, Akira; Kojo, Masashi; Hirose, Kikuji; Goto, Hidekazu
2011-11-02
The path-integral renormalization group and direct energy minimization method of practical first-principles electronic structure calculations for multi-body systems within the framework of the real-space finite-difference scheme are introduced. These two methods can handle higher dimensional systems with consideration of the correlation effect. Furthermore, they can easily be extended to multicomponent quantum systems containing more than two kinds of quantum particles. The key to the present methods is employing linear combinations of nonorthogonal Slater determinants (SDs) as multi-body wavefunctions. Notably, the same accuracy as the variational Monte Carlo method is achieved with only a few SDs. This enables us to study the entire ground state consisting of electrons and nuclei without the need for the Born-Oppenheimer approximation. Recent activities on methodological developments aiming towards practical calculations, such as the implementation of auxiliary fields for the Coulombic interaction, the treatment of the kinetic operator in imaginary-time evolutions, the time-saving double-grid technique for bare-Coulomb atomic potentials and the optimization scheme for minimizing the total-energy functional, are also introduced. As test examples, the total energy of the hydrogen molecule, the atomic configuration of methylene and the electronic structures of two-dimensional quantum dots are calculated, and the accuracy and applicability of the present methods are demonstrated.
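Working with linear combinations of nonorthogonal SDs requires overlaps between determinants, which reduce to determinants of orbital-overlap matrices. A minimal sketch of that standard primitive (orbitals stored as columns of coefficient matrices in a common orthonormal basis; this is textbook linear algebra, not the authors' implementation):

```python
import numpy as np

def sd_overlap(A, B):
    """Overlap <Phi_A|Phi_B> of two Slater determinants whose (possibly
    nonorthogonal) occupied orbitals are the columns of A and B, expressed
    in a common orthonormal single-particle basis: det of the
    orbital-overlap matrix M_ij = <a_i|b_j> = (A^dagger B)_ij."""
    return np.linalg.det(A.conj().T @ B)

def sd_norm(A):
    """Norm of a determinant built from nonorthogonal orbitals."""
    return np.sqrt(np.linalg.det(A.conj().T @ A).real)
```

With orthonormal orbitals the self-overlap is exactly 1; for nonorthogonal orbitals it equals the squared norm, which is why the method can use unnormalized, nonorthogonal SDs freely.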
NASA Astrophysics Data System (ADS)
Nielsen, M.; Elezzabi, A. Y.
2013-03-01
To become a competitor to replace CMOS electronics for next-generation data processing, signal routing, and computing, nanoplasmonic circuits will require an analogue to electrical vias in order to enable vertical connections between device layers. Vertically stacked nanoplasmonic ring resonators formed of Ag/Si/Ag gap plasmon waveguides were studied as a novel 3-D coupling scheme that could be monolithically integrated on a silicon platform. The vertically coupled ring resonators were evanescently coupled to 100 nm x 100 nm Ag/Si/Ag input and output waveguides, and the whole device was submerged in silicon dioxide. 3-D finite difference time domain simulations were used to examine the transmission spectra of the coupling device for varying device sizes and orientations. By having the signal coupling occur over multiple trips around the resonator, coupling efficiencies as high as 39% at telecommunication wavelengths between adjacent layers were achieved with planar device areas of only 1.00 μm2. As the vertical signal transfer was based on coupled ring resonators, the signal transfer was inherently wavelength dependent. Changing the device size by varying the radii of the nanorings allowed for tailoring the coupled frequency spectra. The plasmonic resonator based coupling scheme was found to have quality (Q) factors upwards of 30 at telecommunication wavelengths. By allowing different device layers to operate on different wavelengths, this coupling scheme could lead to parallel processing in stacked independent device layers.
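The wavelength-selective transfer and the quoted Q factors follow from standard ring-resonator behavior. A minimal sketch using the generic add-drop ring transmission formula (the coupling coefficients `t1`, `t2`, round-trip loss `a`, effective index and circumference below are illustrative assumptions, not values from the paper's FDTD model):

```python
import numpy as np

def drop_transmission(lam, L, n_eff, t1, t2, a):
    """Drop-port power transmission of an add-drop ring resonator.
    lam: wavelengths; L: ring circumference; n_eff: effective index;
    t1, t2: bus self-coupling amplitudes; a: round-trip amplitude loss."""
    phi = 2 * np.pi * n_eff * L / lam                 # round-trip phase
    k1sq, k2sq = 1 - t1**2, 1 - t2**2                 # power coupling ratios
    num = k1sq * k2sq * a
    den = 1 - 2 * t1 * t2 * a * np.cos(phi) + (t1 * t2 * a) ** 2
    return num / den

def quality_factor(lam, T):
    """Q = lambda_res / FWHM, estimated numerically from a single-peak spectrum."""
    i = np.argmax(T)
    above = np.where(T >= T[i] / 2)[0]
    fwhm = lam[above[-1]] - lam[above[0]]
    return lam[i] / fwhm
```

Sweeping the wavelength over one free spectral range shows a Lorentzian-like drop-port peak whose center shifts with ring size, which is the tailoring mechanism the abstract describes.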
Symmetric weak ternary quantum homomorphic encryption schemes
NASA Astrophysics Data System (ADS)
Wang, Yuqi; She, Kun; Luo, Qingbin; Yang, Fan; Zhao, Chao
2016-03-01
Based on a ternary quantum logic circuit, four symmetric weak ternary quantum homomorphic encryption (QHE) schemes were proposed. First, for a one-qutrit rotation gate, a QHE scheme was constructed. Second, in view of the synthesis of a general 3 × 3 unitary transformation, another one-qutrit QHE scheme was proposed. Third, building on the one-qutrit scheme, a two-qutrit QHE scheme for the generalized controlled X (GCX(m,n)) gate was constructed and further generalized to the n-qutrit unitary matrix case. Finally, the security of these schemes was analyzed in two respects. It can be concluded that an attacker can correctly guess the encryption key with a maximum probability p_k = 1/3^(3n), so the schemes can better protect the privacy of users' data. Moreover, these schemes can be well integrated into a future quantum remote server architecture, and thus the computational security of users' private quantum information can be protected in a distributed computing environment.
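Ternary QHE schemes of this kind are built on the generalized (Weyl) Pauli operators for qutrits: a cyclic shift X and a phase gate Z with a cube root of unity. A minimal one-qutrit quantum-one-time-pad sketch using these operators (an illustrative primitive, not the paper's full homomorphic construction):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)                 # primitive cube root of unity
X = np.roll(np.eye(3), 1, axis=0)          # ternary shift: |j> -> |j+1 mod 3>
Z = np.diag([1, w, w**2])                  # ternary phase: |j> -> w^j |j>

def encrypt(psi, a, b):
    """Qutrit one-time pad with secret key (a, b) in {0,1,2}^2."""
    return np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b) @ psi

def decrypt(psi, a, b):
    """Invert the pad: apply Z^(-b) X^(-a) (inverses taken mod 3)."""
    Xi = np.linalg.matrix_power(X, (3 - a) % 3)
    Zi = np.linalg.matrix_power(Z, (3 - b) % 3)
    return Zi @ Xi @ psi
```

Without the key, each of the 9 equally likely pads leaves the ciphertext maximally mixed to the attacker, which is the source of the exponentially small key-guessing probability quoted above.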
Galerkin v. discrete-optimal projection in nonlinear model reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir
Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.
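The time-continuous versus time-discrete distinction can be made concrete on a toy linear ODE x' = Ax with reduced basis V and a backward-Euler step. This sketch contrasts the two projections (a minimal linear illustration only; GNAT additionally uses hyper-reduction, which is not shown, and the test matrices are assumptions):

```python
import numpy as np

def galerkin_step(q, V, A, dt):
    """Backward-Euler step of the Galerkin ROM: project the
    time-continuous residual first (q' = V^T A V q), then discretize."""
    Ar = V.T @ A @ V                              # reduced operator
    return np.linalg.solve(np.eye(len(q)) - dt * Ar, q)

def lspg_step(q, V, A, dt):
    """Backward-Euler step of the discrete-optimal (LSPG) ROM:
    minimize the time-discrete residual ||(I - dt A) V q_new - V q||_2."""
    J = (np.eye(V.shape[0]) - dt * A) @ V
    qn, *_ = np.linalg.lstsq(J, V @ q, rcond=None)
    return qn
```

For dt -> 0 the two steps agree to first order, consistent with the discrete-optimal ROM admitting a time-continuous representation in that limit, while at finite dt they generally differ; this is the regime where the time step must be matched to the reduced basis.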